
What 150,000 Conversations Taught Us About Scaling AI Care

Six years of real-world research and more than 150,000 patient conversations later, we have shown that there is a better way to deliver healthcare.

Dr Nick de Pennington, Founder & CEO

In the late 2010s, I was a neurosurgery resident and I was confused. Why did I, and everyone around me, still deliver healthcare in essentially the same way it had been delivered since the hospital I worked at was founded more than 250 years earlier, in 1758?

Despite extraordinary advances in diagnostics and treatment, care was still organized around a centuries-old 'outpatient' model. Patients traveled to hospitals at fixed times, often repeatedly, to fit the schedules and physical constraints of clinicians and buildings. Capacity was defined by clinic rooms and appointment slots, not by patients' needs or the realities of their lives. That tension was the result of a deeply embedded professional model that privileges clinician time over patient time and treats access as an administrative problem rather than a clinical one.

As a resident, this felt increasingly incoherent. We talked about patient-centered care, yet our systems assumed that care only happened when a patient was physically present, at a time convenient to the institution. Conversations that should have happened earlier, later, or remotely were deferred into the next available clinic slot, sometimes months away. Risk, anxiety, and uncertainty accumulated in the gaps between appointments, invisible to the organization but very real to patients. At the same time, services struggled under a growing mismatch between demand and capacity.

I was convinced there must be a better way. I leaned into quality improvement, and I had always been involved in technology innovation. The answer was to combine the two, and that is what eventually became Ufonia. Six years of real-world research and more than 150,000 patient conversations later, we have shown that there is a better way.

The Hypothesis

Ufonia was founded on a simple hypothesis: that a significant proportion of clinical consultations follow predictable patterns, and that AI could safely handle these interactions, freeing clinicians to focus on patients who genuinely need human expertise.

I'm a UK-trained surgeon, and the UK National Health Service (NHS) has been our proving ground. Not because it was easy, but because it was hard. It is the largest integrated single-payer health system in the world. That might seem an ideal place for coordinating change that affects multiple stakeholders. Unfortunately not. Fixed budgets, salaried staffing, and limited commercial incentives create constraints that strip away superficial innovation. So, if transforming the model of care with AI could work here, it could work anywhere.

From the beginning, we anchored our work in research and safety. As a clinician, I knew that trust would not come from a vision. If colleagues were ever going to accept AI taking on clinical work, they would demand evidence.

What Scale Teaches You

There are key differences between a 50-patient trial and 150,000 real-world conversations. Trials tell you whether something can work. Scale shows you what actually changes.

We've learned a lot from that difference. After 50 patients, you can demonstrate parity on individual technical measures like accuracy. After 150,000 patients, you learn about the effect on whole systems of care. Patients, clinicians, and administrators recommend Dora, our AI clinical assistant, to others. Stereotyped assumptions about who will and will not use AI begin to fall away.

Age, for example, does not predict acceptance. Engagement is high across all demographics but highest in those in their 70s. What mattered was not the technology itself, but whether the interaction was designed to help them. People prefer a well-designed experience that meets them where they are, to a legacy process that is inconvenient and opaque.

The Safety Imperative

Even the simplest clinical conversation carries risk. Incorrect information can cause harm. Missed warning signs can delay care. From the outset, our team treated conversational AI in healthcare not as a natural language processing problem, but as a patient safety problem.

Our approach was to build safety frameworks before scaling, not after. Our software embeds proprietary systems that continuously validate and stress-test clinical interactions. ASTRID evaluates question answering against evidence-based standards. MATRIX uses large-scale simulation to identify failure modes before they ever reach patients. These are not compliance artifacts. They are operational tools that sit at the core of how the system functions.

We were also clear about where AI should not operate. Simple yes or no exchanges are better served by basic messaging. AI is valuable where nuance and complexity matter, but only with appropriate oversight. Dora is designed to recognize its limits, and human-in-the-loop review is a fundamental part of the system rather than an afterthought.

The Capacity Crisis Is Not Coming. It Is Here.

The NHS experience made one thing unmistakable. Workforce constraints are not a future problem. They are already shaping how care is delivered.

Ophthalmology illustrates this clearly. It is the busiest outpatient specialty in the NHS, accounting for around ten percent of all activity. The Getting It Right First Time program has the modest target that high-volume cataract lists should complete eight or more procedures in four hours. Some units struggle to reach half that number, not because surgeons are slow, but because the surrounding pathway is inefficient.

The United States faces the same mathematics with different inputs. The American Academy of Ophthalmology projects a thirty percent workforce gap by 2035. A twelve percent decline in surgeon supply is set against a twenty-four percent increase in demand. The average ophthalmologist is now in their mid-fifties.

The question is no longer whether routine clinical interactions will be automated. It is when, by whom, and under what safeguards.

Introducing "The Transformation"

These challenges and opportunities are the foundation for a new podcast series we have been recording. Next week we are launching 'The Transformation': conversations about how healthcare is evolving in the age of AI, in practice rather than in theory.

The first episode brings together Dr Sarah Maling, an NHS consultant ophthalmologist deeply involved in pathway redesign, and Dr Scott LaBorwit, a high-volume US cataract surgeon operating in a fundamentally different commercial environment. Their conversation explores their perspectives from either side of the Atlantic, what differs, what is shared, and what each system can learn from the other.

While ophthalmology is our starting point, the issues extend far beyond it. Rising demand, constrained supply, and care models designed for a different era are now universal features of healthcare systems. Future episodes will examine AI safety in clinical deployment and the role of robotics in surgery.

What Comes Next

The podcast is the start of a conversation. It is supported by a newsletter that you can subscribe to here.

As we expand into the US and other specialist services in 2026, we will be sharing openly what we are learning, what is proving harder than expected, and where we believe real progress is possible.

Further information on our US Launch Partner Program can be found on our website.
