Regulating AI in Healthcare: Safe, Fast, and Trusted
Published 19 November 2025
Every generation in healthcare sees a moment that resets what is possible. Antibiotics did it. MRI did it. AI is next - and it is arriving faster than past shifts in clinical practice.
This article distils the core principles proposed for regulating AI in healthcare so that patients see the benefits sooner, clinicians retain confidence, and the NHS gains capacity without compromising safety. It invites a national conversation about the kind of system we want to build together.
Who is leading this work
Professor Alastair Denniston - practising consultant ophthalmologist; Professor of Regulatory Science and Innovation at the University of Birmingham; and Executive Director of the Centre of Excellence for Regulatory Science in AI & Digital HealthTech (CERSI-AI) - chairs the MHRA's new National Commission on the Regulation of AI in Healthcare.
His brief is clear: define a framework that works for today's AI systems and the next wave coming tomorrow.
Why AI regulation now
Think of X-rays 130 years ago. The clinical potential was obvious, but safe use took standards, training, and oversight. AI is similar - but updates come in months, not decades. That pace demands a modern approach to safety, evidence, and governance across the full product lifecycle.
The NHS has set out a long-term vision for digital and data-driven care. Delivery now hinges on a regulatory system that is both protective and enabling - so useful tools reach clinics quickly, and stay safe as they evolve.
The framework: three principles
Principle 1: Safe
- Put patient safety at the centre, with risk-proportionate regulation. Standing still can be as risky as adopting a new tool without proper checks.
- Shift from one-off approvals to lifecycle oversight. Require change protocols for model updates, clear versioning, and re-validation when performance-critical elements change (a minimal sketch of such a gate follows this list).
- Demand evidence that transfers to real care: external validation, representative datasets, fairness testing, human-factors evaluation, and clear clinical oversight.
- Make accountability explicit across the chain: developer, deployer, and clinical team. Everyone should know who is responsible for what, and when.
- Strengthen post-market surveillance with simple reporting, rapid triage of incidents, and learn-once-apply-many improvements across sites.
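To make lifecycle oversight concrete, here is a minimal sketch of the kind of release gate a change protocol might encode. It is illustrative only: the component names, version strings, and the rule that touching weights, preprocessing, or decision thresholds triggers re-validation are assumptions for this example, not prescribed requirements.

```python
# Minimal sketch of a release gate for model updates (all names and rules are
# illustrative assumptions, not prescribed requirements).
from dataclasses import dataclass

# Components assumed to be performance-critical for this example.
PERFORMANCE_CRITICAL = {"model_weights", "preprocessing", "decision_threshold"}

@dataclass
class ModelUpdate:
    version: str                          # e.g. "2.3.1", one version per released change
    changed_components: set[str]          # parts of the system touched by this update
    validation_report: str | None = None  # reference to re-validation evidence, if any

def requires_revalidation(update: ModelUpdate) -> bool:
    """An update needs re-validation if it touches any performance-critical element."""
    return bool(update.changed_components & PERFORMANCE_CRITICAL)

def release_gate(update: ModelUpdate) -> str:
    """Block deployment when required re-validation evidence is missing."""
    if requires_revalidation(update) and update.validation_report is None:
        return f"BLOCK {update.version}: re-validation evidence required"
    return f"ALLOW {update.version}"

print(release_gate(ModelUpdate("2.3.1", {"decision_threshold"})))  # blocked, no evidence
print(release_gate(ModelUpdate("2.3.2", {"documentation"})))       # allowed
```

In practice a gate like this would sit inside a supplier's release pipeline, with the evidence reference pointing to the re-validation report shared with deployers and regulators.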
Principle 2: Fast
- Cut friction from idea to impact. Use "single front door" guidance, predictable timelines, and parallel review where appropriate.
- Back SMEs with clarity: templated documentation, model update playbooks, and fees and timelines that small firms can survive.
- Enable privacy-preserving evidence generation through trusted research environments, federated evaluation, and standardised audit datasets (see the federated-evaluation sketch after this list).
- Support safe real-world pilots via regulatory sandboxes with clear entry criteria and exit pathways into routine use.
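As one illustration of federated evaluation, the sketch below assumes each site computes confusion-matrix counts inside its own trusted research environment and shares only those aggregates, never patient-level records. The function names and data layout are hypothetical.

```python
# Minimal sketch of federated evaluation: only aggregate counts leave each site.
def local_counts(labels, predictions):
    """Runs inside a site's trusted research environment; returns aggregates only."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

def pooled_sensitivity(site_counts):
    """Combine per-site aggregates into an overall sensitivity estimate."""
    tp = sum(c["tp"] for c in site_counts)
    fn = sum(c["fn"] for c in site_counts)
    return tp / (tp + fn) if (tp + fn) else float("nan")

site_a = local_counts([1, 1, 0, 0], [1, 0, 0, 0])   # computed at site A
site_b = local_counts([1, 0, 1, 1], [1, 0, 1, 1])   # computed at site B
print(pooled_sensitivity([site_a, site_b]))          # 0.8
```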
Principle 3: Trusted
- Make performance and limits clear to clinicians and patients. Label intended use, population, known failure modes, and what human oversight is required.
- Require continuous performance monitoring in live settings, with public signals when a system is restricted, paused, or withdrawn (a monitoring sketch follows this list).
- Mandate security and resilience practices that match clinical risk, including protections against data drift and model tampering.
- Address equity head-on with bias assessments before deployment and ongoing checks across diverse patient groups.
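The sketch below shows what continuous monitoring with equity checks could look like in outline: live sensitivity is recomputed per patient subgroup and compared against a pre-agreed baseline. The baseline value, alert margin, subgroup labels, and field names are assumptions for illustration, not proposed standards.

```python
# Minimal sketch of live subgroup monitoring against a pre-agreed baseline
# (thresholds and field names are illustrative assumptions).
from collections import defaultdict

BASELINE_SENSITIVITY = 0.90   # agreed before deployment (assumed value)
ALERT_MARGIN = 0.05           # flag any subgroup more than 5 points below baseline

def subgroup_sensitivity(cases):
    """cases: iterable of dicts with 'subgroup', 'label' (1 = disease), 'prediction'."""
    tp, fn = defaultdict(int), defaultdict(int)
    for c in cases:
        if c["label"] == 1:
            if c["prediction"] == 1:
                tp[c["subgroup"]] += 1
            else:
                fn[c["subgroup"]] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn) if tp[g] + fn[g] > 0}

def drift_alerts(cases):
    """Return subgroups whose live sensitivity has fallen below the agreed margin."""
    return {g: s for g, s in subgroup_sensitivity(cases).items()
            if s < BASELINE_SENSITIVITY - ALERT_MARGIN}

live_cases = [
    {"subgroup": "over_75", "label": 1, "prediction": 0},
    {"subgroup": "over_75", "label": 1, "prediction": 1},
    {"subgroup": "under_75", "label": 1, "prediction": 1},
]
print(drift_alerts(live_cases))  # {'over_75': 0.5} -> below baseline, investigate
```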
From principles to practice: what to do now
- Government and regulators: publish end-to-end guidance that joins procurement, data access, clinical safety, and regulation. Stand up shared sandboxes. Agree common evidence standards across bodies to avoid duplication.
- NHS leaders and ICSs: set local governance for AI adoption, including risk tiering, model update controls, and incident reporting. Fund data curation and annotation where benefits are clear. Build clinical time into AI projects from day one.
- Developers and suppliers: pre-register intended use and update plans. Document datasets, validation, and limitations in plain language. Design for human-in-the-loop workflows and prove time savings or outcome gains in real clinics.
- Clinical teams: agree where AI is used to support clinical decisions and where its outputs may directly influence action. Monitor false positives and false negatives, and escalate signs of drift early. Involve patients through clear information and consent where appropriate.
What's next: a national call for evidence
The National Commission will soon invite evidence on how AI in healthcare should be regulated. You do not need to be an AI specialist to contribute. If you have practical experience of patient care, safety, procurement, data, or delivery, your input matters. Questions worth considering include:
- What evidence should be required before first use in the NHS, and after updates?
- How do we ensure small, high-quality companies can meet requirements without stalling?
- What reporting and transparency would help your organisation trust AI at scale?
- Where could a sandbox or shared evaluation service remove delay without adding risk?
This is an opportunity to build a system that is safe for patients, workable for clinicians, and viable for innovators. Let's get it right together.
Optional resources for team upskilling
If your team needs foundational training to engage with AI governance and evaluation, see job-based learning paths at Complete AI Training.