Healthcare AI Lacks Legal Safeguards, Council Warns
A French disability council has flagged significant gaps in legal protections for patients whose care involves artificial intelligence, raising concerns that doctors could increasingly defer decisions to AI systems without adequate oversight.
The National High Council for Persons with Disabilities published the report Thursday, arguing that the doctor-patient relationship must remain fundamentally human. While physicians currently review AI recommendations before acting on them, the council warned that reliance on AI could become standard practice without stronger guardrails.
Validation and Bias Testing Required
The council called for centralised validation systems for medical AI tools and mandatory bias testing to prevent discrimination against specific patient groups. It also proposed an ethical charter for healthcare AI use.
On the technical side, the report emphasised rigorous development methodology. This includes training AI systems on diverse data samples and publishing transparent information about how those systems work. The council said healthcare providers need proper training to use AI tools correctly.
Patient Data Sharing Concerns
The council criticised the European Health Data Space initiative, which allows unrestricted data sharing across Europe, arguing that this approach could undermine patient consent and privacy protections.
The council concluded that the European Union's AI Act needs health-specific rules to address these gaps.