WHO urges governments to clarify liability for AI in healthcare
AI is moving faster than the rules that keep patients safe. The World Health Organization (WHO) is urging countries to set national AI strategies for health, invest in workforce skills, and - most urgently - clarify who is responsible when an AI system makes a mistake or causes harm.
"We stand at a fork in the road," said Dr Natasha Azzopardi-Muscat, director of health systems, WHO/Europe. "Either AI will be used to improve people's health and wellbeing, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours."
What the survey shows
WHO/Europe surveyed 50 of 53 member states. Adoption is real, but governance is lagging.
- 32 countries (64%) already use AI-assisted diagnostics, especially in imaging and detection.
- Half of countries use AI chatbots for patient engagement and support.
- 26 countries (52%) have identified priority AI areas, but only about a quarter have funding to deliver on them.
- Top motivations: improve patient care (98%), reduce workforce pressures (92%), and increase efficiency/productivity (90%).
- Only four countries (8%) have a dedicated national AI strategy for health; seven more (14%) are developing one.
Accountability is the missing piece
Fewer than one in ten countries (8%) have liability standards for AI in health. Legal uncertainty is the top barrier to adoption (86%), followed by financial constraints (78%).
"Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong," said Dr David Novillo Ortiz, regional advisor on data, artificial intelligence and digital health. He urged countries to clarify accountability, create redress mechanisms, and ensure safety, fairness and real-world effectiveness testing before AI reaches patients.
What good governance looks like in practice
- Clear liability rules: specify who owns clinical decisions, when responsibility shifts to vendors, and how insurance covers AI-related harm.
- Risk-tiering and approvals: higher-risk uses get deeper review, independent clinical safety sign-off, and staged rollout.
- Evidence before scale: require intended use, validated performance on local data, bias testing across subgroups, and fail-safe workflows with human oversight.
- Post-market monitoring: track model drift, performance, incidents and patient feedback; pause or roll back when agreed thresholds are breached (a minimal sketch follows this list).
- Transparency and consent: explain where AI is used, how results are produced, and what recourse exists; document consent or provide clear opt-outs where appropriate.
- Vendor guardrails: demand model cards, data sheets, audit logs, cybersecurity assurances, update plans, and decommissioning routes in contracts.
- Redress mechanisms: publish routes for complaints, clinical review, and compensation when harm occurs.
- Skills and culture: train clinicians, managers and informatics teams to evaluate and safely use AI in routine care.
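To make the post-market monitoring item concrete, here is a minimal Python sketch of a threshold-based deployment review. The metric names, thresholds and the pause/rollback actions are illustrative assumptions, not part of the WHO guidance; a real programme would tie them to the local clinical safety case.

```python
# Minimal sketch of a post-market monitoring check. All names and thresholds
# are illustrative assumptions, not taken from the WHO report.
from dataclasses import dataclass

@dataclass
class MonitoringThresholds:
    max_auc_drop: float = 0.05        # tolerated drop vs. the validated baseline
    max_drift_score: float = 0.2      # e.g. a population-stability-style drift index
    max_incidents_per_month: int = 3  # reported safety incidents before forced rollback

def review_deployment(baseline_auc: float, current_auc: float,
                      drift_score: float, incidents_this_month: int,
                      t: MonitoringThresholds) -> str:
    """Return an action for the governance board: 'continue', 'pause' or 'rollback'."""
    if incidents_this_month > t.max_incidents_per_month:
        return "rollback"  # patient-safety incidents trump performance metrics
    if (baseline_auc - current_auc) > t.max_auc_drop or drift_score > t.max_drift_score:
        return "pause"     # investigate before further clinical use
    return "continue"

# Example: review_deployment(0.86, 0.79, 0.12, incidents_this_month=1, t=MonitoringThresholds())
# -> "pause", because performance has dropped by more than the tolerated 0.05
```

The point is that the pause and rollback criteria are agreed and written down before go-live, so the governance board is not improvising when performance slips.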
Keep people at the centre
"AI is on the verge of reshaping healthcare, but its promise will only be realised if people and patients remain at the centre of every decision," said Dr Hans Henri P. Kluge, WHO regional director for Europe. The message: make AI serve clinical realities, not the other way around.
Examples across Europe
Estonia has linked electronic health records, insurance data and population databases into a unified platform that supports AI tools. Finland is investing in AI training for health workers. Spain is piloting AI for early disease detection in primary care.
Inside the NHS: momentum, with caution
Interviews with NHS trust digital leaders show active pilots across back-office and clinical areas. Some organisations have formal AI policies and ethics committees; others admit it "feels very wild west" and are racing to put guardrails in place.
NHS England chief executive Jim Mackey warned against assuming AI is a single fix. He called for "socialising" AI across clinical and operational settings and "finding common ground": AI will not transform everything overnight, but nor is it too risky to use with proper controls. Dr Birju Bartoli, chief executive at Northumbria Healthcare NHS Foundation Trust, said public confidence rests on openness: be upfront about why AI is used and what checks exist. Patients want fast access, good outcomes and honest conversations - if AI helps with that, it has a place.
What healthcare leaders can do this quarter
- Map your AI use cases: classify by risk, intended use and clinical oversight required.
- Set accountability: define decision rights, escalation paths and insurance coverage; involve legal early.
- Tighten procurement: require evidence on performance, bias, safety, security, data handling, updates and support.
- Validate locally: test on your own population, document limitations, and pilot in controlled settings before wider rollout (see the subgroup-validation sketch after this list).
- Build a safety net: implement monitoring, incident reporting, kill-switches and periodic re-validation.
- Be transparent with patients and staff: plain-language notices, consent where appropriate, and feedback channels.
- Invest in skills: schedule training for clinicians, operational leaders and data teams so AI augments care rather than creating workarounds. For structured options, see AI courses by job role.
- Secure funding beyond pilots: budget for validation, monitoring, model updates and liability cover.
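As flagged in the "Validate locally" item above, here is a minimal sketch of what subgroup validation on local data might look like. The column names, AUC floor and subgroup-gap tolerance are hypothetical; the intent is only to show performance being checked per subgroup, with sample sizes visible, before wider rollout.

```python
# Minimal sketch: subgroup validation of a binary risk model on local data.
# Assumes a pandas DataFrame `df` with hypothetical columns: `y_true` (0/1 outcome),
# `y_score` (model output) and `subgroup` (e.g. age band or ethnicity category).
import pandas as pd
from sklearn.metrics import roc_auc_score

MIN_AUC = 0.80           # hypothetical local acceptance floor
MAX_SUBGROUP_GAP = 0.05  # hypothetical tolerance between best and worst subgroup

def subgroup_report(df: pd.DataFrame) -> pd.DataFrame:
    """AUC per subgroup, with sample counts so small groups stay visible."""
    rows = []
    for name, grp in df.groupby("subgroup"):
        if grp["y_true"].nunique() < 2:
            continue  # AUC is undefined when a group has only one outcome class
        rows.append({
            "subgroup": name,
            "n": len(grp),
            "auc": roc_auc_score(grp["y_true"], grp["y_score"]),
        })
    return pd.DataFrame(rows)

def passes_local_validation(report: pd.DataFrame) -> bool:
    """Require every subgroup to clear the floor and cap the best-to-worst gap."""
    worst, best = report["auc"].min(), report["auc"].max()
    return worst >= MIN_AUC and (best - worst) <= MAX_SUBGROUP_GAP
```

A report like this, filed alongside the intended-use statement, gives the approval committee something concrete to sign off and to re-run at each re-validation.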
The takeaway for healthcare leaders: keep pace with adoption, but anchor it in clear accountability, real-world evidence and patient trust. Do that, and AI can reduce pressure on staff while improving outcomes.