AI in health care: staying ahead of the issues
AI is moving fast in health and care. The upside is big; so are the risks. The question is how to proceed with clarity, speed and guardrails.
Stay one step ahead
The UK needs a standing way to track, anticipate and pressure-test near- to medium-term AI developments in health. Right now, that mechanism doesn't exist, even though the stakes are high.
We also need a mindset flip. The risks of the current state of care are plain. Demand is outstripping supply and the trend is upward. Unless things significantly change, we'll face much larger problems of access and quality of care.
Progress will be disruptive. Professional boundaries, governance, and institutional power will shift. Better to anticipate those shifts now than react later.
The current state of play
Most capital and energy today goes into basic science and narrow clinical tools. Public health is under-served, despite its potential to influence outcomes at scale. If markets won't correct that, policy will need to.
Some countries are putting a clear stake in the ground. The US Department of Health and Human Services has set out a comprehensive approach to AI in health, with practical near-term goals and cross-government alignment to amplify impact. HHS's published AI work provides useful context.
In the UK, the focus is shifting to safe, fast assessment and spread of effective AI. The MHRA, led by Lawrence Tallon, has launched the National Commission into the Regulation of AI in Healthcare to tackle this head-on.
Regulate for learning, not perfection
Regulators face a growing wave of AI products and adaptive systems that change as they're used. A single pass/fail approval won't cut it. We need a "safe enough to try now" approach, paired with continuous, real-world review.
No regulator can carry this alone. Local services will need to run impact checks in practice. Experts like Erik Mayer, Adnan Tufail and Lydia Ragoonanan point to the value of local evaluation, real patient data and feedback loops to see what actually works.
For reference, the MHRA's work on software and AI in medical devices outlines a flexible direction of travel.
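To make "safe enough to try now, reviewed continuously" concrete, here is a minimal sketch of what a local post-deployment check might look like. Everything in it is illustrative: the class name, the rolling-window design, and the sensitivity threshold are assumptions for the sketch, not any regulator's specification.

```python
from collections import deque


class SafetyMonitor:
    """Rolling-window check on a deployed model's real-world sensitivity.

    Illustrative only: window size and threshold would be set per product
    and per deployment, not hard-coded as here.
    """

    def __init__(self, window_size: int = 200, min_sensitivity: float = 0.85):
        self.window = deque(maxlen=window_size)  # recent confirmed positives
        self.min_sensitivity = min_sensitivity

    def record(self, predicted_positive: bool, actually_positive: bool) -> None:
        # Only confirmed positive cases count toward sensitivity.
        if actually_positive:
            self.window.append(predicted_positive)

    def sensitivity(self):
        if not self.window:
            return None  # no confirmed positives observed yet
        return sum(self.window) / len(self.window)

    def breached(self) -> bool:
        # Trigger a human review only once a full window has accrued
        # and observed sensitivity has dropped below the floor.
        s = self.sensitivity()
        return (
            s is not None
            and len(self.window) == self.window.maxlen
            and s < self.min_sensitivity
        )
```

The point of the sketch is the shape, not the numbers: continuous review means the service, not just the regulator, watches live performance and has a pre-agreed trigger for pausing and escalating.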
Make evaluation routine and fast
The NHS needs quick, cheap, standardised tests to see what delivers measurable value. If we don't generate trustworthy evidence, we create an information vacuum filled with vendor claims and political pressure.
Kaiser Permanente's approach, described by Andy Bindman, offers useful signals. It relies on scale, high-quality electronic records, mature clinical governance, buy-in from clinicians, and a culture of systematic trials with rapid feedback from staff and patients.
In the NHS, a network of federated, relatively mature sites could act as permanent test beds. Shared methods, shared metrics, shared reporting. Results that others can reproduce, not one-off pilots that fade.
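Shared methods and shared metrics imply that every test-bed site computes the same report from the same definitions. A minimal sketch of such a per-site report follows; the arm names, metric names, and binary-outcome framing are assumptions for illustration, not an NHS standard.

```python
def evaluate_site(arm_outcomes: dict) -> dict:
    """Compute a shared, reproducible metric set for one test-bed site.

    `arm_outcomes` maps an arm name (e.g. 'ai_assisted', 'usual_care')
    to a list of binary outcomes (1 = target outcome met). The metric
    names here are illustrative placeholders.
    """
    report = {}
    for arm, outcomes in arm_outcomes.items():
        n = len(outcomes)
        rate = sum(outcomes) / n if n else 0.0
        report[f"{arm}_n"] = n            # sample size per arm
        report[f"{arm}_rate"] = round(rate, 3)
    # Headline comparison: AI-assisted arm versus usual care.
    report["absolute_difference"] = round(
        report.get("ai_assisted_rate", 0.0) - report.get("usual_care_rate", 0.0), 3
    )
    return report
```

Because every site emits the same fields, results can be pooled or compared directly, which is what separates a reproducible network from a collection of one-off pilots.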
What healthcare leaders can do now
- Stand up a national horizon-scanning function for AI in health, with tight links to service delivery and public health.
- Rebalance incentives to spur AI for prevention and population health, not just individual diagnostics.
- Build regulatory sandboxes plus post-deployment monitoring so "safe to try" becomes "proven in practice."
- Adopt a common evaluation toolkit: baseline metrics, comparator groups, bias checks, safety triggers, and public reporting.
- Align AI in health with welfare, education and environmental policy to compound benefits across services.
- Invest in workforce skills so teams can assess, deploy and govern AI responsibly; curated training options by role are a good starting point.
Final thought
The NHS AI strategic roadmap is expected. The test is whether it goes beyond high-level ambitions into practical, future-facing execution. Set up the mechanisms now to learn fast, improve safely and keep the public's trust.