MHRA Opens Consultation on Future Regulation of AI in Healthcare
The UK's Medicines and Healthcare products Regulatory Agency (MHRA) has opened a public call for evidence on how AI in healthcare should be regulated. The consultation runs from 18 December 2025 to 2 February 2026 and invites input from patients, clinicians, industry, healthcare providers, and the wider public.
This work supports the newly established National Commission into the Regulation of AI in Healthcare, which will advise the MHRA on long-term policy. The goal is clear: keep patients safe, support useful innovation, and make regulation proportionate to risk.
Why this matters for healthcare leaders
AI is already embedded in diagnostics, screening, workflow optimisation, and patient-facing tools across the NHS. As systems become more adaptive, traditional medical device rules are being stress-tested.
MHRA leadership has emphasised trust, safety, and proportionality. Commission Chair Professor Alastair Denniston highlights that performance in labs isn't enough; the real test is safe, reliable use in real clinical settings. Deputy Chair Professor Henrietta Hughes underscores that patient voices are critical because they live with the outcomes: accuracy, privacy, and access.
What the MHRA is asking
- Do current regulatory rules need updating for AI used in healthcare?
- How should emerging risks be identified and addressed quickly, including in adaptive and learning systems?
- How should accountability be shared between regulators, developers, providers, clinicians, and users?
The consultation is open to everyone; no deep technical background is required. The Commission will use the feedback to inform recommendations to the MHRA in 2026.
Who should respond
- Patients, carers, and the public
- Healthcare professionals and clinical leaders
- NHS and independent healthcare providers
- Technology companies and developers
How to prepare a useful response
- Map where AI is already used in your service (triage, imaging, decision support, admin) and document outcomes and gaps.
- Describe your current governance: validation, human-in-the-loop checks, incident reporting, and clinical safety cases (e.g., DCB0129/DCB0160 equivalents).
- Flag real-world risks you see: model drift, bias, opaque outputs, integration issues, alert fatigue, and data privacy concerns.
- Propose practical accountability splits: who approves deployment, who monitors performance post-launch, and how patients are informed.
- Share what would help you use AI safely: clearer post-market monitoring, update notification requirements, transparency standards, and procurement guidance.
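One risk worth documenting concretely is model drift. As a purely illustrative sketch (not any MHRA-endorsed method), the check below assumes you log a model's prediction scores and compares a recent window against a reference window, flagging when the recent mean moves too many standard errors from baseline. A real post-market monitoring pipeline would be validated clinically; this only shows the kind of routine check a governance team might describe in a response.

```python
# Hypothetical sketch: a minimal post-deployment drift check on logged
# model output scores. Illustrative only; not a validated monitoring tool.
from statistics import mean, stdev

def drift_alert(reference_scores, recent_scores, z_threshold=3.0):
    """Flag drift when the recent mean score sits more than
    z_threshold standard errors away from the reference mean."""
    ref_mean = mean(reference_scores)
    ref_sd = stdev(reference_scores)
    # Standard error of the mean for the recent window
    standard_error = ref_sd / (len(recent_scores) ** 0.5)
    z = abs(mean(recent_scores) - ref_mean) / standard_error
    return z > z_threshold

# Example: scores shift upward after, say, an upstream data pipeline change
reference = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]
recent = [0.61, 0.63, 0.60, 0.62, 0.64, 0.59, 0.62, 0.61]
print(drift_alert(reference, recent))  # True: recent mean far from baseline
```

Even a simple threshold like this makes "who monitors performance post-launch" a concrete question: someone must own the logs, the threshold, and the escalation path when the alert fires.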
Timeline and where to submit
The call for evidence is open from 18 December 2025 to 2 February 2026. You can find MHRA consultations on the UK Government website.
Bottom line: this is a chance to influence practical rules that affect day-to-day care. Share what works, what breaks, and what you need to keep patients safe while getting value from AI.