A digital workforce is coming to healthcare.
Y Combinator has called 2025 the “year of AI agents” and singled out healthcare as a key focus area. Bill Gates predicts that agents will “upend the software industry, bringing about the biggest revolution in computing since we went from typing commands to tapping on icons.” Nvidia CEO Jensen Huang recently proclaimed that agents “present a trillion-dollar opportunity.”
You get the point. Agents are all the rage throughout the tech industry. But what exactly are agents? What role might they play in healthcare? And what’s holding them back from realizing their potential?
What are AI Agents?
Because early chatbots used decision trees and scripted responses, they struggled with in-depth conversations. With ChatGPT’s release, millions of people discovered how to interact with large language models across an almost unlimited range of topics.
Agents build on that foundation. They are LLMs enhanced with capabilities such as retrieval, memory, and tools—enabling them to carry out narrowly defined tasks without human supervision (e.g., booking a flight or responding to a customer service request). While copilots assist humans, agents take over tasks entirely. Agentic AI coordinates multiple task-specific agents to accomplish multi-step goals.
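In code, the pattern looks roughly like the loop below, a minimal Python sketch: the model decides whether to call a tool or declare the task done, tool results are written back into memory, and the loop is bounded so a confused agent eventually hands off to a person. The call_llm and book_appointment functions are hypothetical stand-ins for illustration, not any vendor’s actual API.

```python
# A minimal sketch of an "agent loop": an LLM augmented with tools and memory,
# acting until the task is done. call_llm() and book_appointment() are
# hypothetical stand-ins so the example runs end to end.

def call_llm(messages):
    # Hypothetical LLM call: first asks to book, then declares the task complete.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "book_appointment",
                "args": {"patient": "A. Smith", "slot": "2025-07-01T09:00"}}
    return {"final": "Appointment booked for A. Smith on July 1 at 9:00 AM."}

def book_appointment(patient, slot):
    # Hypothetical tool: in practice this would call a scheduling system's API.
    return f"confirmed:{patient}:{slot}"

TOOLS = {"book_appointment": book_appointment}

def run_agent(task, max_steps=5):
    memory = [{"role": "user", "content": task}]      # conversation plus tool history
    for _ in range(max_steps):                        # bounded loop as a basic safeguard
        decision = call_llm(memory)
        if "final" in decision:                       # model says the task is complete
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])   # execute requested tool
        memory.append({"role": "tool", "content": result})     # feed result back as memory
    return "Escalating to a human: step limit reached."

print(run_agent("Book a follow-up visit for A. Smith next week."))
```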
Why All the Excitement About Healthcare Agents?
Many healthcare provider organizations are struggling. Their workforces are strained and shorthanded. Margins are thin, labor costs are rising, and many processes are inefficient and wasteful. Meanwhile, patients often struggle to access timely, affordable, and effective care.
Numerous industries, such as banking, travel, and personal finance, have improved productivity and service by doing more with fewer people. Agents may unlock similar opportunities in healthcare. Always on, scalable, and tireless, agents could automate a range of administrative and even clinical processes.
That’s why many view agents not just as tech, but as operational infrastructure and digital labor. As Luminai CEO Kesava Kirupa Dinakaran explained, “When you think about how computers can drive value, it’s by improving operations. At their core, healthcare organizations are all about operations.” The hope is that agents can expand access, lower costs, improve experiences, and enhance outcomes.
How are Agents Spreading Throughout Healthcare?
A burgeoning wave of companies is racing to deliver AI agents that enable provider organizations to do more with less.
Call centers are today’s leading use case. Most patients still make appointments by phone, and many skip them due to scheduling hassles. Agents can manage inbound calls by conversing with patients and taking the next best actions (e.g., scheduling an appointment or making a referral). When agents cannot handle an end-to-end call, they can verify key details, summarize the conversation, and hand off to human staff.
Assort Health Co-CEO Jeff Liu explained that his company aims to help healthcare organizations “train the best operators they ever had.” His co-CEO Jon Wang reported that, by turning scheduling protocols into rules engines and plugging agents into EHRs, the company automates most inbound calls—routing the rest to call center workers and nurses in priority order.
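As a toy illustration (not Assort Health’s actual system), encoding a scheduling protocol as a rules engine might look something like the sketch below; the rules, visit types, and queues are invented for the example.

```python
# Illustrative only: the agent classifies a caller's request, applies explicit
# scheduling rules, and either books directly, escalates, or hands off to staff.

SCHEDULING_RULES = [
    # (condition on the request, action) -- all values here are invented examples
    (lambda req: "chest pain" in req["reason"].lower(),  ("escalate", "nurse_line")),
    (lambda req: req["type"] == "new_patient",           ("book", "new_patient_60min")),
    (lambda req: req["type"] == "follow_up",             ("book", "follow_up_20min")),
]

def route_call(request):
    for condition, action in SCHEDULING_RULES:
        if condition(request):
            return action
    return ("handoff", "call_center_queue")   # anything unmatched goes to a human

print(route_call({"reason": "chest pain since this morning", "type": "follow_up"}))
print(route_call({"reason": "refill question", "type": "unknown"}))
```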
Similarly, Hello Patient CEO Alex Cohen reports that his company’s “fully generative voice and SMS agents handle real-time front‑office calls and proactively re-engage patients, letting clinics boost access and fill schedules without adding headcount.”
Agents are also handling outbound calls. Notable’s agents prepare patients for clinic visits by verifying their insurance benefits, clarifying (and documenting) their reason for the visit, and checking them in for their appointments. Qventus’ agents help optimize patients for procedures and surgeries by providing preparation instructions, sending appointment reminders, answering general care questions, and coordinating pre-admission testing. Infinitus deploys agents to perform health risk assessments and help patients access specialty therapies and rare disease programs.
Some agents support patients between and after visits. Ambience Healthcare is developing agents that send follow-up instructions, medication reminders, and scheduling prompts based on visit notes. Hippocratic AI provides a constellation of agents for nurse-level clinical tasks, such as making post-discharge follow-up calls and closing care gaps. The company calls this “super staffing” since many organizations lack the staff to do enough of this work on their own. Sword Health, a digital physical therapy platform, uses agents to onboard patients, provide customer service, and even help them return hardware. And Cedar recently launched an AI voice agent, Kora, to handle billing inquiries, explain charges, surface payment options, and connect patients with financial assistance.
Agents are not limited to patient-facing roles—they’re also streamlining the back office. For example, one of Luminai’s agents reads incoming faxes and automatically triggers downstream workflows like refills and referrals. VoiceCare AI automates communication between provider organizations, insurers, and patients. Its CEO, Parag Jhaveri, reported that their agent, Joy, can wait on hold for more than 30 minutes, navigate phone trees, sustain multi-hour conversations, and take actions like updating claims and filing requests.
What Are the Key Technical Challenges?
Building a well-scripted, polished demo is easy. Delivering reliable performance on real-world healthcare tasks is much harder. Agents often fall far short of human performance. For one, healthcare is complex—filled with edge cases, exceptions, and contextual nuance. As legendary software engineer Steven Sinofsky noted, automation is ultimately about handling exceptions, not the routine.
Several technical barriers stand in the way. Healthcare data is deeply siloed and fragmented. Lisa Bari, Head of External Affairs at Innovaccer, warns against deploying agents without full contextual data.
Also, while LLMs enable agents to handle a wide range of inputs, they can produce uncontrolled outputs. Longer conversations and more contextual data can reduce accuracy and increase latency. Moreover, error rates compound across multi-step processes. For example, an agent that is 98% reliable at each step will complete a five-step task successfully only about 90% of the time (0.98⁵ ≈ 0.90), as the short calculation below shows.
Model accuracy decreases with additional agentic steps.
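A few lines make the compounding explicit:

```python
# Per-step reliability raised to the number of steps: small per-step error
# rates add up quickly across multi-step workflows.

per_step_accuracy = 0.98
for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps: {per_step_accuracy ** steps:.1%} end-to-end success")
# 1 step: 98.0%, 5 steps: ~90.4%, 10 steps: ~81.7%, 20 steps: ~66.8%
```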
Developers use various strategies to make agents more reliable. As Sword Health product lead Rik Renard, RN, emphasized, “Evaluating agents’ output against pre-specified criteria is essential for deploying reliable agents, yet few people discuss this.”
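A minimal sketch of that idea follows, with invented test cases and checks: each case pairs an input with pre-specified criteria the agent’s output must satisfy, and every failure is recorded for review.

```python
# Illustrative evaluation harness: cases and checker functions are invented,
# not drawn from any company's actual evals.

EVAL_CASES = [
    {
        "input": "Patient asks to reschedule Tuesday's visit to Friday.",
        "checks": [
            ("mentions_new_date", lambda out: "friday" in out.lower()),
            ("no_clinical_advice", lambda out: "diagnos" not in out.lower()),
        ],
    },
]

def evaluate(agent_fn):
    results = []
    for case in EVAL_CASES:
        output = agent_fn(case["input"])
        failures = [name for name, check in case["checks"] if not check(output)]
        results.append({"input": case["input"], "passed": not failures, "failures": failures})
    return results

# Example with a stub agent that simply confirms the request:
print(evaluate(lambda text: "Sure, I've moved your visit to Friday at 10 AM."))
```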
Many agentic systems use specialized knowledge graphs to contextualize information and “coordinating agents” to link multiple, narrow task-specific agents. Technical guardrails help ensure agents stay within scope and flag questionable output for human review.
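A guardrail can be as simple as a scope check that runs before any reply goes out, as in the sketch below; the allowed topics and blocked patterns are illustrative, not drawn from any of the companies mentioned here.

```python
# Illustrative runtime guardrail: keep the agent within its allowed scope and
# flag questionable output for human review instead of sending it.

ALLOWED_TOPICS = {"scheduling", "directions", "insurance"}
BLOCKED_PATTERNS = ["dosage", "diagnosis", "stop taking"]   # out-of-scope clinical content

def guardrail(topic, reply):
    if topic not in ALLOWED_TOPICS:
        return {"action": "route_to_human", "reason": f"topic '{topic}' out of scope"}
    flagged = [p for p in BLOCKED_PATTERNS if p in reply.lower()]
    if flagged:
        return {"action": "hold_for_review", "reason": f"contains {flagged}"}
    return {"action": "send", "reason": None}

print(guardrail("scheduling", "You're booked for Friday at 10 AM."))
print(guardrail("medication", "You can double your dosage."))
```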
Still, picking the right use cases is critical. Notable Chief Medical Officer Dr. Aaron Neinstein told me that his company first deploys agents in low-risk areas (e.g., patient intake) to build trust before expanding into more complex workflows.
Even with clear use cases, deployment remains hard and no shortcuts exist. As Cedar CEO Dr. Florian Otto summed it up, “Agents must be built workflow by workflow and only deployed when they reliably work well.”
Agents must also integrate with other tech systems—like EHRs and CRMs—to access contextual data and execute tasks. Most use native API integrations, though some interact through the same point-and-click interfaces that healthcare workers use.
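On the API path, a scheduling agent might query an EHR’s FHIR-style interface for open slots roughly as sketched below; the base URL is a placeholder, and real integrations also involve authentication, error handling, and vendor-specific quirks.

```python
# Hedged sketch of the API-integration path: searching a FHIR-style endpoint
# for free appointment slots. The endpoint is a placeholder, not a real system.

import requests

FHIR_BASE = "https://ehr.example.com/fhir"   # placeholder base URL

def find_free_slots(schedule_id, date):
    resp = requests.get(
        f"{FHIR_BASE}/Slot",
        params={"schedule": schedule_id, "status": "free", "start": f"ge{date}"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"]["start"] for entry in bundle.get("entry", [])]

# e.g. find_free_slots("Schedule/123", "2025-07-01") -> ["2025-07-01T09:00:00Z", ...]
```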
Ultimately, in an “agentic economy,” agents must interact with one another—communicating information, transferring resources, collaborating, and tracking transactions. This will require persistent identity and seamless communication protocols, which developers are now building. Several companies, including Salesforce, Microsoft, and Innovaccer, have launched platforms to orchestrate multi-agent healthcare workflows.
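Those protocols are still taking shape, but the basic requirements are easy to sketch: every message needs a persistent sender identity, a reference to the shared task, and enough metadata to audit the exchange later. The schema below is illustrative only, not any emerging standard, and the identifiers and codes are invented.

```python
# Illustrative message envelope for agent-to-agent communication: persistent
# identities, a task reference, and an auditable trail. Not a real protocol.

import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentMessage:
    sender_id: str            # persistent identity of the sending agent
    recipient_id: str         # identity of the receiving agent
    task_id: str              # ties the message to a multi-step workflow
    content: dict             # payload, e.g. a referral or prior-auth request
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

msg = AgentMessage(
    sender_id="agent:scheduling:clinic-42",
    recipient_id="agent:prior-auth:payer-7",
    task_id="task:referral:0815",
    content={"type": "prior_auth_request", "cpt_code": "99213"},
)
print(asdict(msg))
```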
Barriers To Change
Arthur C. Clarke famously explained, “Any sufficiently advanced technology is indistinguishable from magic.” If you haven’t interacted with an AI agent or tried a modern voice model, you should. This isn’t your pharmacy’s old IVR system. The technology has crossed the uncanny valley; it feels like magic.
But unlike magic, it’s not infallible. In high-stakes situations, unreliable AI can cause real harm. As extreme examples, consider how the National Eating Disorder Association’s chatbot “Tessa” encouraged users with eating disorders to diet, or how a Character.AI companion allegedly pushed a teenager to commit suicide.
Agents, however, are more than chatbots. They are tireless digital workers who are always ready to complete specific tasks. When stitched together, they can form multi-agent systems—or “agent swarms”—that handle complex, interdependent processes and behaviors.
Yet, US and EU regulators have approved exceedingly few healthcare AI agents. Hardian Health’s Dr. Hugh Harvey warns, “Health systems and clinicians using unregulated AI agents must accept all the risk.” Will they be willing? While regulatory approval is cumbersome, it may be necessary to speed adoption.
Also, unlike magic, AI agents aren’t plug-and-play. Implementing agents is a massive change-management undertaking.
Patients will need to adjust their expectations and learn to interact differently with technology and healthcare. In his book Alchemy, Rory Sutherland explains that in our “unrelenting quest” for greater efficiency, we often forget to ask “whether people like efficiency as much as economic theory believes they do.”
Take the “doorman fallacy”: a hotel that replaces doormen with automatic doors may save money while overlooking the other valuable functions doormen provide, such as hailing taxis, providing security, welcoming guests, and signaling the hotel’s status.
Similarly, healthcare workers often do far more than their job descriptions suggest. For example, if agents automate scheduling, who will reassuringly mention that “everyone loves Dr. Smith” or that she tends to run late at the end of the day?
The “doorman fallacy” explains that hotel doormen do far more than open doors.
Of course, healthcare workers aren’t perfect or always available. Several company leaders I spoke with say patients prefer interacting with their agents over healthcare workers. But this remains to be seen. Outside healthcare, Klarna—the buy now, pay later company—recently walked back its ambitious efforts to replace two-thirds of its customer service workforce with AI agents. It turns out many customers still want to talk to real people.
Agents will also reshape how healthcare workers do their jobs. By offloading drudgery, agents could empower some. Yet others may resent having to babysit new digital coworkers that could potentially replace them.
Interestingly, one company CEO shared that executives and managers, not frontline staff, are often the most resistant. Perhaps they worry agents will shrink their teams and reduce their influence. Or, more likely, they may feel daunted by all that responsible deployment demands: surfacing tacit knowledge, defining ground truths, streamlining workflows, and retraining workers for new forms of human-AI collaboration.
Infinitus CEO Ankit Jain explained that his company “sells outcomes, not technology.” It focuses on supporting change management, recognizing that “organizations must be able to crawl before they walk and then run.”
For agents to succeed, organizations must look inward. Technology is an amplifier. It can boost productivity or magnify inefficiency. Healthcare organizations, which tend strongly toward inertia, must actively reexamine their operations, rebalance their workforces, and reinforce their digital governance. Otherwise, poorly integrated agents could cause confusion and chaos.
Optimizing workflows is essential. So is relieving downstream constraints. For example, a flawless appointment-scheduling agent is of limited value if doctors’ schedules are always full. An outreach agent that flags patient needs is only helpful if there are enough nurses and clinicians available to respond.
Taken together, these developments point toward a future that’s both promising and uncertain.
Where are we heading?
Healthcare’s digital history has taught us hard lessons: technology can help, but it cannot miraculously solve all problems. And tech doesn’t work in isolation—lasting change depends on rethinking people, processes, and priorities, not just deploying tools.
Roy Amara famously observed, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” That seems likely with AI agents.
In the near term, agents will make existing workflows faster and cheaper—answering calls, managing intake, and making appointments. Next, they may improve those workflows—coordinating across channels, adding personalization, and responding with context. Eventually, they may enable entirely new approaches, with networks of agents operating semi-autonomously across systems.
At the heart of this evolution is a core tension: leverage versus certainty. Agents promise a kind of abundance—tireless labor at negligible cost. But that leverage introduces risk. For now, they’ll likely remain in administrative domains, where errors are less costly and rarely dangerous.
Still, care delivery is also quite inefficient. Care models for both acute and chronic illness have barely changed in decades—and the clinician-patient encounter remains healthcare’s choke point.
Here, too, agents may help: handling triage, guiding protocol-driven decisions, even managing chronic conditions. Much of this is already technically feasible. But real progress will require much more: rigorous evaluation, regulatory clarity, updated business models, cultural acceptance, redesigned teams, and seamless escalation paths to human care.
Melvin Kranzberg’s First Law reminds us: “Technology is neither good nor bad; nor is it neutral.” The promise of agents is real—but conditional. Their impact depends on how we design, deploy, and govern them.
Will agents make care more personalized—or more transactional? Will they return time to clinicians—or reduce their autonomy and turn them into machine supervisors? Will they bring people closer together—or insert more distance? Will they relieve burden—or hollow out the human core of care?
Agents are coming. What they become depends on us.
I thank the following people for discussing this topic with me: Ray Chen and Jonathan Fullerton (Ambience Healthcare), Jeffery Liu and Jon Wang (Assort Health), Florian Otto (Cedar), Hugh Harvey (Hardian Health), Alex Cohen (HelloPatient), Rick Keating (Hippocratic AI), Ankit Jain (Infinitus), Abhinav Shashank and Lisa Bari (Innovaccer), Pankaj Gore (Insight Health), Kesava Kirupa Dinakaran (Luminai), Aaron Neinstein and Tushar Garg (Notable), David Atashroo (Qventus), Rouhaan Shahpurwala (Sully.AI), Rik Renard and Kevin Wong (Sword Health), Maria Gonzalez Manso (Tucuvi), Parag Jhaveri (VoiceCare AI), Sergei Polevikov (WellAI), and Stuart Winter-Tear.