MHRA opens call for evidence on AI in healthcare - here's what healthcare teams need to know
The Medicines and Healthcare products Regulatory Agency (MHRA) has launched a national call for evidence on how AI should be regulated across the NHS and wider care settings. The process runs from 18 December 2025 to 2 February 2026 and will inform the National Commission into the Regulation of AI in Healthcare.
The goal: clear, practical standards that keep patients safe, give clinicians confidence, and let useful innovation reach practice without unnecessary friction.
What the Commission is asking for
- Modernising the rules: Are current medical device and software regulations fit for AI, or do we need updates for data-driven, continuously learning systems?
- Keeping patients safe as AI evolves: How should the system identify, monitor, and address issues quickly, especially with adaptive models that change post-deployment?
- Clarifying responsibility: Where should accountability sit across regulators, manufacturers, NHS organisations, clinicians, and patients using AI-enabled tools?
Why your input matters
According to MHRA Chief Executive Lawrence Tallon, the aim is safe, proportionate use of AI that earns public trust. The Commission, which brings together clinicians, patient representatives, industry, academics, and government, wants real-world insight from people delivering care and experiencing AI at the point of service.
Professor Alastair Denniston, who chairs the Commission, emphasises that the focus is not just the algorithm but how AI is used by professionals and patients within complex pathways. Deputy chair Professor Henrietta Hughes highlights that patients carry the impact of AI decisions, so their lived experience must guide safeguards.
Who should respond
- Clinicians across primary, community, and secondary care
- ICS leaders, CCIOs, CMIOs, CNIOs, Caldicott Guardians, and patient safety teams
- Digital, data, and technology leaders (IT, informatics, procurement)
- AI developers, suppliers, and SMEs in health tech
- Patients, carers, and public representatives
What to include in your submission
- Risk management for adaptive AI: Change-control for models that learn over time, real-world performance monitoring, guardrails for drift, and criteria for pausing or withdrawing tools.
- Clinical safety and human factors: Clear role definitions, confirmation bias safeguards, alert fatigue mitigation, explainability needs, and safe handovers between AI and clinicians.
- Data, equity, and inclusion: Dataset representativeness, bias detection and remediation, accessibility for diverse populations, and protections for privacy.
- Accountability and governance: Who signs off deployment, incident reporting routes, evidence requirements, and liability across vendors and providers.
- Assurance and evaluation: Pre-deployment validation, post-market surveillance, performance thresholds, real-world evidence collection, and meaningful patient outcomes.
- Procurement and commissioning: Minimum assurance artefacts (clinical safety case, DPIA, cybersecurity profile, model card), service levels for updates, and decommissioning plans.
- Implementation in the NHS: Integration with EPRs, workflow design, training for end users, and resourcing for safe adoption at scale.
Timelines and how to take part
The call for evidence runs from Thursday 18 December 2025 to Monday 2 February 2026. The submission link goes live at 9:30am on 18 December 2025. Visit the official MHRA Call for Evidence page to respond.
Context: what clinicians and staff think today
Public and staff support for AI in care is growing, particularly for administrative tasks that free up time for clinical work. Concerns remain about oversight and the risk of misleading outputs, which reinforces the need for proportionate, clear regulation.
Front-line use is already visible. Research from the Nuffield Trust, for example, reports that a significant share of GPs are using AI in practice, while also flagging gaps in assurance and governance. You can read their insights in Nuffield Trust: How are GPs using AI?
The wider UK AI market is projected to reach £1 trillion by 2035, with health and social care expected to see net job gains. The regulatory foundations laid now will influence how benefits reach patients and staff.
Practical next steps for healthcare leaders
- Map every AI or algorithm-enabled tool in use or in the pipeline. Record intended use, data flows, and known risks (a minimal register sketch follows this list).
- Establish minimum assurance requirements: clinical safety case (DCB0129/0160 equivalents), DPIA, cybersecurity profile, model documentation, and bias testing.
- Set up a monitoring plan for real-world performance, incidents, calibration drift, and user feedback, and define escalation routes.
- Clarify decision rights: who approves deployment, who owns outcomes, and how issues are reported and resolved.
- Prepare staff training and patient communications that explain capabilities, limitations, and what happens when things go wrong.
- Include decommissioning criteria from day one: when to roll back, retrain, or retire a tool.
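For the digital, data, and informatics leaders in the audience, the mapping and monitoring steps above can start very lightweight: a structured record per tool plus a basic check of recent performance against an agreed baseline. The sketch below (Python 3.10+) is illustrative only; the field names, product and vendor names, and the 5% tolerance are assumptions made for the example, not an MHRA, NHS, or DCB-defined schema.

```python
# Illustrative sketch only: field names and thresholds are assumptions, not a mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a local register of AI or algorithm-enabled tools."""
    name: str
    supplier: str
    intended_use: str                     # the clinical or administrative task it supports
    deployment_status: str                # e.g. "pilot", "live", "retired"
    data_flows: list[str] = field(default_factory=list)   # where patient data goes
    known_risks: list[str] = field(default_factory=list)
    assurance_artefacts: dict[str, bool] = field(default_factory=dict)  # e.g. DPIA completed?
    last_reviewed: date | None = None

def performance_drifted(baseline: float, recent: float, tolerance: float = 0.05) -> bool:
    """Flag a tool for escalation if recent performance falls more than
    `tolerance` below its validated baseline (metric choice is local)."""
    return (baseline - recent) > tolerance

# Example entry for a hypothetical documentation-summarising tool
tool = AIToolRecord(
    name="ClinicNoteSummariser",          # hypothetical product
    supplier="ExampleVendor Ltd",         # hypothetical supplier
    intended_use="Draft discharge summaries for clinician review",
    deployment_status="pilot",
    data_flows=["EPR -> supplier cloud (UK region)"],
    known_risks=["omission of safety-critical details", "automation bias"],
    assurance_artefacts={"clinical_safety_case": True, "DPIA": True, "model_card": False},
    last_reviewed=date(2025, 12, 1),
)

if performance_drifted(baseline=0.91, recent=0.84):
    print(f"Escalate: {tool.name} performance below agreed threshold")
```

Even a minimal register like this gives the later steps, such as decision rights, escalation routes, and decommissioning criteria, something concrete to attach to.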
Resources
- Nuffield Trust: GP use of AI - insights from the front line
- Complete AI Training: courses by job role (helpful for building AI literacy across clinical and non-clinical teams)
If you work in health or care, your experience is exactly what the Commission needs. Share what works, where the risks are, and what would make AI safer and more useful in real clinical workflows.