UK Creates AI Healthcare Commission to Update NHS Rules by 2026

UK sets up a national commission to regulate AI across the NHS, led by Prof Alastair Denniston. Risk-based rules due by 2026; trusts should tighten governance and validation.

Published on: Sep 27, 2025

UK forms national commission to regulate AI in healthcare: what it means for the NHS

The UK government has launched a National Commission on the Regulation of AI in Healthcare to accelerate safe, effective adoption of AI across the NHS. The goal is straightforward: update an outdated regulatory environment so useful tools can move from pilots into routine care without putting patients at risk.

The commission is chaired by Professor Alastair Denniston of the University of Birmingham, executive director of the UK's Centre of Excellence for Regulatory Science in AI & Digital HealthTech (CERSI-AI). Members include clinicians, academics, patient safety advocates, and representatives from tech companies such as Google and Microsoft.

Why this matters for NHS teams

Lawrence Tallon, chief executive of the Medicines and Healthcare products Regulatory Agency (MHRA), has acknowledged that current medical device rules were not built for AI. Without modernized regulation, promising applications will remain stuck in limbo.

Officials expect clearer, more transparent rules to support investment, give clinicians and patients confidence, and set practical guardrails. The UK framework will not copy the EU's AI Act; it is expected to be clear, practical, and proportionate to clinical risk.

Timeline and scope

The commission will report to the MHRA and contribute to a regulatory framework expected in 2026. This is pressing for frontline teams already using AI-driven scribes, radiology decision support, diagnostic algorithms that scan large datasets, and adaptive cardiac devices that respond to patient physiology.

Today, many of these tools are still governed by rules drafted more than 20 years ago. The update aims to close that gap.

Risk signals regulators will weigh

Global bodies have raised credible concerns about AI in health, including data ethics, cybersecurity, and bias. The EU's AI Act includes provisions relevant to medical AI, though it has faced criticism from some tech firms for going too far.

The UK approach will seek predictability and proportionality, giving innovators a clear path while protecting patients.

What healthcare leaders should do now

  • Establish AI governance. Appoint accountable clinical safety leads, define human oversight, and set incident reporting for AI-assisted care.
  • Audit current and planned AI tools. Check data provenance, model update processes, bias testing, cybersecurity controls, and documentation of intended use.
  • Tighten procurement. Require post-market surveillance plans, model change notifications, performance dashboards, interoperability details, and audit logs.
  • Validate in real workflows. Run prospective evaluations, monitor drift, and define safe disengagement protocols if performance degrades.
  • Prepare evidence packs. Summarize clinical performance, generalizability, usability, and patient impact to align with likely MHRA expectations.
  • Engage early. Participate in MHRA consultations and provide feedback from clinical, informatics, and patient safety perspectives.
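The "validate in real workflows" step above can be sketched as a minimal drift check. This is an illustrative sketch only, not anything prescribed by the commission or MHRA; the function name, window sizes, and the 0.05 threshold are hypothetical placeholders a trust would set with its clinical safety leads.

```python
# Minimal sketch of a performance-drift check for an AI-assisted tool.
# All names and thresholds are illustrative, not regulatory requirements.

def drift_alert(baseline_scores, recent_scores, max_drop=0.05):
    """Flag drift when mean performance in the recent window falls
    more than `max_drop` below the baseline mean (e.g. accuracy/AUC)."""
    baseline = sum(baseline_scores) / len(baseline_scores)
    recent = sum(recent_scores) / len(recent_scores)
    return (baseline - recent) > max_drop

# Example: baseline accuracy ~0.90, recent window slipping to ~0.82
baseline = [0.91, 0.90, 0.89, 0.90]
recent = [0.83, 0.81, 0.82]
if drift_alert(baseline, recent):
    print("DRIFT: trigger safe-disengagement review")  # prints here
```

In practice the alert would feed the safe-disengagement protocol mentioned above: pause or down-weight the tool, notify the accountable clinical safety lead, and record the incident for post-market surveillance.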

Guidance for clinicians

  • Document how AI informs decisions and where clinical judgment prevails.
  • Be alert to automation bias. Build in checkpoints that require active review of AI outputs.
  • Report adverse events and near misses involving AI features through existing safety channels.

Priorities for digital, data, and security teams

  • Threat model AI-specific risks (model tampering, prompt injection, data leakage). Require vendor security attestations and clear patch/update processes.
  • Minimize data exposure. Use privacy-preserving techniques and review third-party data transfers, storage locations, and retention.
  • Bias and fairness monitoring. Track performance across populations and care settings; agree escalation thresholds with clinical leads.
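The bias-monitoring point above amounts to tracking one metric per subgroup and escalating when any group falls below an agreed floor. A minimal sketch, assuming a pre-computed metric per group; the group labels and the 0.85 floor are hypothetical and would be agreed with clinical leads:

```python
# Illustrative subgroup performance check; group names, metric values,
# and the performance floor are hypothetical placeholders.

def subgroup_flags(metrics_by_group, floor=0.85):
    """Return the subgroups whose performance metric falls below the
    agreed floor, sorted for stable reporting to clinical leads."""
    return sorted(g for g, m in metrics_by_group.items() if m < floor)

metrics = {"age_18_40": 0.93, "age_65_plus": 0.81, "site_rural": 0.84}
print(subgroup_flags(metrics))  # -> ['age_65_plus', 'site_rural']
```

Running this per care setting as well as per demographic group helps surface the cross-site generalizability gaps the evidence packs above should document.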

What to expect next

  • Draft guidance and consultations from MHRA addressing adaptive algorithms, change control, and post-market monitoring.
  • Transitional arrangements for legacy AI-enabled devices and software already in use.
  • Clearer coordination with data protection requirements and clinical safety standards.

For broader context, see the WHO's guidance on AI ethics in health and the EU's AI Act overview.

Upskilling your teams

If you lead digital adoption or clinical safety and need structured training on applied AI, consider role-based learning paths for your clinical, informatics, and security teams.

Bottom line: the UK is moving to give the NHS clear, usable rules for AI. Start aligning your governance, procurement, and validation practices now so you're ready as the framework lands.