EU Move to Ease AI Rules Collides With WHO Warning on Patient Safety
AI in healthcare is moving faster than the rules meant to keep patients safe. On the same day the WHO's European office warned of a growing regulatory gap, the European Commission proposed easing parts of the EU's digital rulebook, including elements tied to AI.
The timing matters for hospitals and clinicians deploying AI tools at the bedside. Without clear guardrails, risk shifts to patients and care teams; with delayed or muddled regulation, high-potential tools get stuck in procurement or, worse, are used without sufficient oversight.
What the WHO Found
The WHO/Europe report, based on responses from 50 of 53 countries in the region, flags two primary barriers to AI adoption: legal uncertainty (86% of countries) and financial affordability (78%). The message is blunt: the systems are coming; the safety net isn't.
Liability is the biggest gap. Only four countries have health-specific AI liability standards, with three more in progress. That leaves hospitals and clinicians exposed when an algorithm errs, and patients bearing the consequences of misdiagnosis or mistreatment.
The report also warns about algorithmic bias when training data doesn't reflect the populations served. Combined with weak privacy protections, the risk concentrates on already vulnerable groups.
Inside the EU's "Digital Omnibus" Proposal
The Commission says the package will simplify digital rules and cut costs, especially for SMEs. A key flashpoint: proposed amendments to the GDPR, including changes to definitions of sensitive data and expanded processing under "legitimate interest." Critics warn this could dilute protections for health data.
The Commission argues transparency and the right to object would remain. The proposal still needs approval from the Council and Parliament, so details may shift. For reference, the current GDPR text is here: Regulation (EU) 2016/679.
Medical Device AI Rules: A Delay with Consequences
The Commission is also seeking to delay the rollout of AI Act rules that apply specifically to medical devices by up to 16 months beyond August 2026. Patient advocates say postponement weakens near-term safeguards for high-risk clinical AI.
Industry groups pushed for even longer, citing overlapping requirements with existing medical device laws and warning of innovation flight. For healthcare providers, a delay extends the period where accountability and evidence standards remain uneven.
Why This Matters for Patient Safety
Without clear liability rules, incident response and redress become murky. If an AI-driven triage tool misclassifies a patient, who is responsible: the vendor, the hospital, or the clinician who accepted or overrode the output?
Bias risks compound the problem. Unrepresentative training sets can make certain groups "invisible" to models. Privacy risks are rising too, as expanding data uses make consent, opt-outs, and audit trails essential, not optional.
A Widening Regulatory Gap
Policy-making is fragmented across the region. Wealthier countries, such as the UK with its AI Airlock regulatory sandbox, are testing AI medical tools in controlled settings. Many others rely on broad, cross-sector rules (33 countries) that miss health-specific risks.
Public engagement is thin. Only 42% of countries consulted patient associations, and just 22% sought input from the broader public. That's a fast track to building tools that miss clinical realities and community needs.
What Healthcare Leaders Can Do Now
- Stand up an AI clinical governance group that includes clinicians, informatics, legal, ethics, and patient reps.
- Procure with proof: require external validation, performance by subgroup, intended-use clarity, and versioning plans.
- Set data rules: data minimization, de-identification, access logs, human-readable model cards, and audit rights.
- Define liability in contracts: incident reporting within set timeframes, vendor indemnities, and insurance coverage.
- Mandate human oversight: clear escalation paths, override protocols, and explainability appropriate to the use case.
- Run bias and safety testing pre-deployment; monitor post-deployment with drift alerts and outcome reviews (see the sketch after this list).
- Document everything: clinical evaluation plans, risk registers, change logs, and patient-facing explanations.
- Protect patients: simple opt-out pathways, consent refreshers for new data uses, and clear notices at point of care.
- Budget realistically: account for infrastructure, integration, updates, and subscription creep.
- Invest in people: train clinicians and data teams on safe use, limitations, and incident response.
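To make two of these items concrete, here is a minimal sketch of what "performance by subgroup" checks and a basic drift alert could look like in practice. The column names (y_true, y_score, subgroup), the synthetic data, and the 0.2 PSI threshold are illustrative assumptions, not a standard schema or a vendor API; a real deployment would adapt this to its own validation data and governance thresholds.

```python
# Minimal sketch: subgroup performance check pre-deployment, drift alert post-deployment.
# Column names and thresholds are illustrative assumptions, not a standard.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df: pd.DataFrame, group_col: str = "subgroup") -> pd.Series:
    """AUC per subgroup; large gaps flag potential bias before go-live."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["y_true"], g["y_score"])
    )

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live feature distribution.
    A common rule of thumb treats values above 0.2 as drift worth investigating."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example usage with synthetic data (for shape only).
validation = pd.DataFrame({
    "y_true": np.random.randint(0, 2, 1000),
    "y_score": np.random.rand(1000),
    "subgroup": np.random.choice(["A", "B", "C"], 1000),
})
print(subgroup_auc(validation))

reference_lab_values = np.random.normal(5.0, 1.0, 1000)  # e.g., training-period values
live_lab_values = np.random.normal(5.6, 1.2, 1000)       # e.g., this month's intake
if psi(reference_lab_values, live_lab_values) > 0.2:
    print("Drift alert: escalate to the AI governance group for outcome review.")
```

The point is not this specific metric or threshold, but the habit: quantify performance for every group you serve before go-live, and keep watching the input distribution afterward.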
Financing and Equity: Close the Gap
Cost is a top barrier for 78% of countries. Consider AI-aware reimbursement: pay for the safe use of approved AI in defined pathways, as we do for procedures and drugs. Tie payment to outcomes and documented oversight.
Direct investments should prioritize infrastructure, datasets that reflect local populations, and public-interest use cases. Private partnerships need transparency clauses and equitable access commitments.
What to Watch Next
Policy shifts are in motion. The Digital Omnibus will move through the Council and Parliament. The AI Act's medical device timelines may slip, but safety expectations will not. Keep an eye on the WHO's guidance on the topic: WHO ethics and governance of AI for health.
As WHO's Hans Kluge put it, "The rapid rise of AI in healthcare is happening without the basic legal safety nets needed to protect patients and healthcare workers." Natasha Azzopardi-Muscat added, "We stand at a fork in the road." For healthcare leaders, the choice is practical: build the safety net now, and demand proof before deployment.
Upskilling Your Team
If you're formalizing AI governance and need role-based training for clinical, data, and compliance teams, explore options here: AI courses by job.