AI in healthcare must be safe, fast and trusted
AI is already changing UK healthcare: earlier detection, smarter diagnostics, and voice tools that take notes in the background. The upside is clear: less admin for clinicians, better experiences for patients, and in many cases, safer care.
The problem: our current device regulations were built for static products with slow release cycles. AI doesn't work like that. Models update, drift, and are increasingly marketed directly to the public, often with minimal professional mediation.
AI isn't a cure-all
AI is a tool, not a replacement for clinical judgment. Opaque models, drift, hallucinations, and bias can widen inequalities and deskill the workforce if left unchecked. There's also a risk of losing the human connection that sits at the core of care.
The aim isn't to slow progress. It's to implement guardrails so AI improves outcomes, saves time, and earns trust without introducing new risks.
From high jump to hurdles
Given the pace of change, a "one big gate" approach to approvals won't work. A better model is a series of sensible hurdles: quicker market entry with clear conditions, followed by continuous real-world monitoring and transparent reporting.
That means the regulatory system must be agile, context-aware, and able to respond quickly when signals emerge. Safety isn't a one-off check; it's an ongoing process.
What safe and effective AI looks like: three roles
- The model
  - Clear intended use and clinical context
  - Evidence of testing and validation, both in silico and in real clinical settings
  - Transparent training data sources and bias controls
  - Versioning, change logs, and rollback plans
- The supplier
  - Sound quality and performance management across the lifecycle
  - Rapid, transparent reporting of safety signals to the regulator and customers
  - A clear plan for updates, retraining, and decommissioning
  - Plain-language documentation clinicians can trust and use
- The deployer (clinician or provider)
  - Defined use cases, inclusion/exclusion criteria, and human-in-the-loop oversight
  - Local validation, workflow integration, and audit trails
  - Bias checks on local populations and monitoring for drift over time
  - Staff training, patient communication, and incident escalation routes
Practical steps for NHS teams and providers
- Start with a narrow, clearly defined use case. Document risks, mitigations, and fallback procedures.
- Pilot before scale. Compare AI-assisted workflows to standard care; measure impact on safety, time, and outcomes.
- Keep a model register: version, data sources, approvals, owner, known limits, and update history (a minimal example entry is sketched after this list).
- Build a simple monitoring dashboard: accuracy, error patterns, disparities by subgroup, and near-misses (a subgroup-disparity check is sketched after this list).
- Set patient-facing standards: when AI is used, what it does, and who to contact if something feels off.
- Create a clear safety signal pathway: internal escalation and reporting to the regulator where appropriate.
- Train the workforce on intended use, limits, and override criteria. Treat this like medicines safety training.
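To make the model register concrete, here is a minimal sketch in Python of the fields a register entry might capture. The ModelRegisterEntry class, its field names, and the example values are illustrative assumptions, not a prescribed NHS or MHRA schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRegisterEntry:
    """One row in a local AI model register (illustrative fields only)."""
    name: str                # e.g. "Chest X-ray triage assistant"
    version: str             # deployed model version
    owner: str               # accountable clinical or operational owner
    intended_use: str        # approved use case and clinical context
    data_sources: list[str]  # training and validation data provenance
    approvals: list[str]     # e.g. regulatory marking, local governance sign-off
    known_limits: list[str]  # documented exclusions and failure modes
    update_history: list[str] = field(default_factory=list)  # dated change log

# Hypothetical entry, for illustration only.
entry = ModelRegisterEntry(
    name="Example triage model",
    version="2.3.1",
    owner="Clinical Safety Officer, Radiology",
    intended_use="Prioritising adult chest X-rays for radiologist review",
    data_sources=["Vendor training set v4", "Local validation cohort 2024"],
    approvals=["UKCA Class IIa", "Trust clinical safety case v1.2"],
    known_limits=["Not validated for paediatric imaging"],
    update_history=[f"{date(2025, 1, 15)}: upgraded from v2.2, rollback plan documented"],
)
```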
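The subgroup-disparity check on the monitoring dashboard can also start simply. The sketch below, using pandas, computes accuracy per subgroup from logged predictions and flags groups that trail the best performer; the column names and the 10-point threshold are assumptions to be set locally with clinical and equity input.

```python
import pandas as pd

# Logged predictions with ground truth and a demographic subgroup column.
# Column names ("subgroup", "prediction", "outcome") are illustrative.
log = pd.DataFrame({
    "subgroup":   ["A", "A", "B", "B", "B", "C", "C", "C"],
    "prediction": [1, 0, 1, 1, 0, 0, 1, 0],
    "outcome":    [1, 0, 1, 0, 0, 1, 1, 0],
})

# Accuracy and sample size per subgroup.
by_group = (
    log.assign(correct=log["prediction"] == log["outcome"])
       .groupby("subgroup")["correct"]
       .agg(accuracy="mean", n="size")
)

# Gap between each subgroup and the best-performing one.
by_group["gap_to_best"] = by_group["accuracy"].max() - by_group["accuracy"]

# Flag subgroups trailing the best by more than 10 percentage points
# (an illustrative threshold, not a regulatory requirement).
flagged = by_group[by_group["gap_to_best"] > 0.10]

print(by_group)
print("Subgroups needing review:", list(flagged.index))
```

The same pattern extends to sensitivity, specificity, or error rates, and can be refreshed on whatever cadence local governance requires.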
National Commission: a clearer path ahead
In October 2025, the Medicines and Healthcare products Regulatory Agency (MHRA) launched the National Commission into the Regulation of AI in Healthcare, chaired by Professor Alastair Denniston. The commission brings together global experts and specialist working groups to shape solutions that keep patients and staff safe while enabling faster, high-quality deployment.
Over the next year, the commission will advise on a new regulatory framework for AI as a medical device. The goal is simple: enable the NHS to adopt effective AI faster, with continuous oversight and public trust built in from day one.
For more information on the MHRA's approach to software and AI as medical devices, visit the official guidance. To register interest in the public call for evidence, contact: info@mhra.gov.uk.
What success looks like
- Faster access to proven AI, paired with ongoing, transparent monitoring.
- Clear roles and accountability for model creators, suppliers, and deployers.
- Simple, honest communication with patients about how AI is used in their care.
- Equity checks baked into design, deployment, and post-market surveillance.
- Upgrades that improve performance without breaking workflows or safety.
AI can improve care and reduce pressure on teams. With the right checks before and after deployment, we can deliver safer services, faster decisions, and better outcomes for patients.
If your team needs structured upskilling on safe, practical AI adoption in healthcare, explore role-based AI courses.