By OWEN TRIPP
So much of the early energy around generative AI in healthcare has been geared toward speed and efficiency: freeing doctors from admin tasks, automating patient intake, and easing paperwork-heavy pain points. This is all necessary and helpful, but much of it boils down to established players optimizing the existing system to suit their own needs. As consumers flock to AI for healthcare, their questions and needs highlight the limits of off-the-shelf bots — and the pent-up demand for no-judgment, all-in-one, personalized help.
Transforming healthcare so that it actually works for patients and consumers — ahem, people — requires more than incumbent-led efficiency. Generative AI will be game-changing, no doubt, but only when it’s embedded and embraced as a trusted guide that steers people toward high-quality care and empowers them to make better decisions.
Upgrading Dr. Google
From my vantage point, virtual agents and assistants are the most important frontier in healthcare AI right now — and in people-centered healthcare, period. Tens of millions of people (especially younger generations) are already leaning into AI for help with health and wellness, testing the waters of off-the-shelf apps and tools like ChatGPT.
You see, people realize that AI isn’t just for polishing emails and vacation itineraries. One-fifth of adults consult AI chatbots with health questions at least once a month (and given AI’s unprecedented adoption curve, we can assume that number is rising by the day). For most, AI serves as a souped-up, user-friendly alternative to search engines. It offers people a more engaging way to research symptoms, explore potential treatments, and determine if they actually need to see a doctor or head to urgent care.
But people are going a lot deeper with chatbots than they ever did with Dr. Google or WebMD. Beyond the usual self-triage, the numbers tell us that up to 40% of ChatGPT users have consulted AI after a doctor’s appointment. They were looking to verify and validate what they’d heard. Even more surprising, after conferring with ChatGPT, a similar percentage then re-engaged with their doctor to request referrals, tests, or medication changes, or to schedule a follow-up.
These trends highlight AI’s enormous potential as an engagement tool, and they also suggest that people are defaulting to AI because the healthcare system is (still) too difficult and frustrating to navigate. Why are people asking ChatGPT how to manage symptoms? Because accessing primary and preventive care is a challenge. Why are they second-guessing advice and prescriptions? Sadly, they don’t fully trust their doctor, are embarrassed to speak up, or don’t have enough time to talk through their questions and concerns during appointments.
Chatbots have all the time in the world, and they’re responsive, supportive, knowledgeable, and nonjudgmental. This is the essence of the healthcare experience people want, need, and deserve, but that experience can’t be built with chatbots alone. AI has a critical role to play, to be sure, but to fulfill its potential it has to evolve well beyond off-the-shelf chatbot competence.
Chatbots 2.0
When it comes to their healthcare, the people currently flocking to mass-market apps like ChatGPT will inevitably realize diminishing returns. Though the current experience feels personal, the advice and information are ultimately generic, built on the same foundation of publicly available data, medical journals, websites, and countless other sources. Even the purpose-built healthcare chatbots on the market today overwhelmingly rely on public data and outsourced AI models.
Generic responses and transactional experiences have inherent shortcomings. As we’ve seen with other health-tech advances, including 1.0 telehealth and navigation platforms, impersonal, one-off services driven primarily by in-the-moment need, efficiency, or convenience don’t equate to long-term value.
For chatbots to avoid the 1.0 trap, they need to do more than put the world’s medical knowledge at our fingertips.
They need to be connected to the full range of healthcare settings and interactions, including access to human experts and relevant next steps that people can take in the flow of getting answers. Creating that experience requires two big things:
The first is personalization. In healthcare, that involves more than just a personified user experience. The most promising use cases for AI — including automated nudges, appointment summaries, automated scheduling and care coordination, and fast answers to benefits and billing questions — depend on having built-in access to individuals’ health benefits and medical records. Without those (private and secure) data connections, the guidance AI provides will never be truly personalized, no matter how engaging the interface. Knowledge alone isn’t enough; with bots as with doctors, feeling seen and heard — and understood and remembered — is critical to building trust.
The second is humans, standing by. As time goes on, AI will be able to handle a greater range of questions and tasks, but human expertise — and clinical expertise, in particular — is an indispensable backstop. Even if chatbots are someday able to autonomously prescribe drugs and tests (as some envision), many essential healthcare interactions will still require the involvement of a human care team. The fusion of artificial and human intelligence — what I call AI+EQ — is exponentially more powerful than either one alone.
Joining forces
Who, exactly, is going to deliver this experience? No one player in healthcare today has all of the necessary capabilities.
OpenAI, Google, and the other companies leading the AI revolution certainly have the technology, but they lack the healthcare connectedness and expertise (including the doctors) required to bring together the clinical, financial, and administrative aspects of healthcare in a single experience. Not to mention that many tech giants have dipped their toes into healthcare over the years, only to reconsider.
Health systems and health insurance companies certainly have the healthcare expertise, and they’re hard at work incorporating AI into their businesses, but many have lost people’s trust. With AI-powered navigation tools and prior authorizations, insurers already have a track record of disguising cost-control initiatives as “member-centric” services. By the same token, it’s not hard to envision AI tools created by hospitals and health systems that — intentionally or otherwise — would be biased toward high-cost specialty care regardless of appropriateness.
The entity that can deliver the healthcare AI experience people deserve likely doesn’t exist quite yet. It’s probably a partnership — not a single company — that brings purpose-built AI models, clinical expertise, leading healthcare connectedness, system-wide access, and person-specific data under one roof.
People want AI they can trust, AI that actually makes healthcare work for them. They’re open to it, but they can’t build it by themselves.
Owen Tripp is the co-founder and CEO of Included Health, a personalized all-in-one healthcare company.