Lessons from Building Regulated Healthcare AI Products

Anyone with access to Claude can build a healthcare chatbot in a weekend, and OpenAI is now actively encouraging people to share their health data. The technology is remarkable, and this is the most exciting time to be building in healthcare.
But even as product development accelerates, getting from a demo to a regulated product clinicians can actually use with their patients can still take months or years of work. I've spent over a decade building healthcare products, the last few years at Ufonia, where we've completed over 150,000 autonomous clinical conversations using AI. Here are the things I believe are still critical to getting adoption and difficult to shortcut:
1. You have to earn clinician trust
If clinicians don't trust or understand your product, it won't be used. Their expertise is clinical, not necessarily technical, so the obligation is on you to explain the details of how your product works. They're professionally accountable for their patients: if something goes wrong, it's their registration and career on the line. So they'll want to know how your product handles edge cases, and they'll generally expect published evidence of its benefits.
At Ufonia, peer-reviewed research is often the first thing clinical teams ask about. We've published over 20 studies across safety, accuracy, and patient acceptability, in journals like Nature Medicine. That body of work now opens doors, and each new study and each new site makes the next one easier. The evidence pipeline compounds, so start building it as soon as you can.
2. You have to navigate complex healthcare systems
In Bill Aulet's book Disciplined Entrepreneurship, he has a chapter on determining the Decision Making Unit (DMU) for your product. In large, integrated, single-payer health systems like the NHS, the DMUs can be very complex. For Ufonia, our Champions, End Users, and Primary Economic Buyers are all different people, and they rarely talk to each other. You need to build relationships with each stakeholder separately and understand what each of them needs.
When your product is used in a large, busy, distributed healthcare system, you can't assume its impact will be visible to people who aren't on the front line of care. Early on at Ufonia, we'd completed a few hundred calls, but when we spoke to senior people in the department, they felt the product wasn't helping them: they simply weren't in the loop about what it had actually done. Now we make sure the right people can see the impact, not just those on the front line.
Understanding who needs to see the impact, who needs to advocate for you, and who needs a business case is as important as the product itself. Map your DMU early and be deliberate about visibility from day one.
3. You have to understand real patient situations
Most patients have never been asked how they'd improve the service they receive. And when companies do try to understand patients, they often fall back on easy stereotypes. Our average patient age is 75, and we routinely see engagement that would surprise teams who assume older patients won't use AI.
It is genuinely hard to talk to real patients, and the situations you're designing for can be highly time-dependent. Take cataract surgery: at one week post-op, a patient will probably still have redness and pain they might be anxious about. At four weeks, the redness and pain should have subsided, so you're no longer giving reassurance; you're picking up potentially problematic issues. What patients need at these specific time points is very difficult to guess.
To actually talk to real patients, you need ethics approval and formal agreements with trusts. That infrastructure takes time to build, but it's worth the upfront investment. Guessing what patients need is slow and expensive.
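The cataract example above can be sketched as code: the goal of a follow-up conversation flips depending on how long after surgery it happens. This is a hypothetical illustration only; the `CallIntent` and `intent_for_days_post_op` names, the 14-day threshold, and the symptom lists are illustrative assumptions drawn from the example, not clinical guidance or Ufonia's actual logic.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the same symptom means different things at
# different post-op time points, so the conversation's goal must change.
# Thresholds and symptom lists are illustrative, not clinical guidance.

@dataclass
class CallIntent:
    reassure_about: list = field(default_factory=list)  # symptoms expected at this stage
    escalate_on: list = field(default_factory=list)     # symptoms that warrant clinical review

def intent_for_days_post_op(days: int) -> CallIntent:
    if days <= 14:
        # Early recovery: redness and mild discomfort are common and expected,
        # so the call's job is reassurance.
        return CallIntent(
            reassure_about=["redness", "mild pain"],
            escalate_on=["severe pain", "vision loss"],
        )
    # Later recovery: the same symptoms are no longer expected, so the
    # conversation shifts from reassurance to detecting problems.
    return CallIntent(
        reassure_about=[],
        escalate_on=["redness", "pain", "vision loss"],
    )
```

The point of the sketch is that neither branch is guessable from a generic "post-op check-in" brief; the branching only emerges from talking to clinicians and patients at each time point.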
4. You have to embrace the regulation
Healthcare is a regulated industry: both the people and the products are regulated. Our product is a medical device, which means we have to prove it's safe. There's a lot of legacy here: IEC 62304 was written in 2006 for a different era of software. Regardless, you have to read the manual, understand it, and, crucially, interpret it for scenarios it wasn't originally written to manage.
We've built regulatory expertise in-house rather than relying on consultants. It's slower in the short term, but it pays back when your team can build internal tooling to support submissions and create your own testing frameworks, like MATRIX and ASTRID, for when the regulations tell you what to prove but not how. Regulation isn't the thing that slows healthcare AI down, but misunderstanding it is. It's worth learning it deeply yourself.
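A minimal sketch of the kind of harness that paragraph describes: the regulation says what to prove (the system behaves safely on defined inputs) but not how, so you define scenarios with expected safety behaviours and check every release against them. All names here (`Scenario`, `run_release_checks`, the `escalate` flag) are hypothetical assumptions for illustration, not Ufonia's MATRIX or ASTRID.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Scenario:
    name: str
    patient_utterance: str
    must_escalate: bool  # expected safety behaviour for this input

def run_release_checks(
    agent_reply: Callable[[str], Dict],
    scenarios: List[Scenario],
) -> List[str]:
    """Return the names of scenarios where the agent deviates from
    its expected behaviour. An empty list means the release passes."""
    failures = []
    for s in scenarios:
        reply = agent_reply(s.patient_utterance)
        if reply.get("escalate", False) != s.must_escalate:
            failures.append(s.name)
    return failures

# Usage with a stand-in agent (a real harness would call the live system):
def stub_agent(utterance: str) -> Dict:
    return {"escalate": "pain" in utterance}

checks = [
    Scenario("severe pain escalates", "I have severe pain", True),
    Scenario("no symptoms, no escalation", "I feel fine", False),
]
print(run_release_checks(stub_agent, checks))  # [] means all checks passed
```

The design point is that the scenario library, not the test runner, is where the regulatory work lives: each scenario is a concrete, reviewable claim about safe behaviour that you can show an auditor or notified body.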
What this looks like in practice
Last year we released an LLM-based version of Dora into live production, not as a proof of concept, but handling real patient care across NHS trusts. We used MATRIX and ASTRID to prove the system behaved correctly before it reached patients. We leaned on six years of peer-reviewed evidence and our CQC registration to give clinical teams confidence. We tested with real patients through the ethics infrastructure we'd already built. And we made sure the right people in each trust understood what the product was doing and why it mattered. None of that was quick to build. But it's what it takes to ship software that actually delivers care, not just an impressive demo.
We're now working on bringing this to new markets and building new capabilities, which we'll share more about soon. If you're building AI for healthcare and grappling with any of this, I'm always happy to compare notes. Reach out on LinkedIn or by email. I'm based between London and Oxford; let's grab a coffee!
