Trust the process? Healthcare AI is battling security and regulatory complexity
Across Europe, healthcare is under pressure from fiscal strain, workforce shortages, and rising expectations. Artificial intelligence sits at the center of that tension: huge upside, clear risks, and a trust gap that won't close itself.
The EU AI Act could set a safer path for AI in care. But fragmentation, workforce stress, and uneven implementation threaten to blunt its impact.
The opportunity - and the warning
At the European Health Forum Gastein, the mood was measured. EHFG President Clemens Martin Auer cautioned that AI could reshape labor markets and stress the social contract if adopted without intention.
Lucilla Sioli from the European Commission highlighted that AI could move health faster than other sectors. Steffen Thirstrup of the European Medicines Agency added the non-negotiables: outputs must be trustworthy, used ethically, and checked by people who know what "good" looks like.
Why trust is still shaky
Ricardo Baptista Leite pointed to a hard truth: spending has climbed for decades while health outcomes and access have not kept pace. The result is widening inequity: those without resources fall further behind.
Clinicians often work with a sliver of what affects health. Much of the signal (environment, behaviors, social context) sits outside the clinic and produces data we rarely use. Leite's stance: don't retrofit old workflows with AI; rethink care delivery around meaningful data and outcomes.
Valentina Strammiello stressed that clear rules support adoption. For patients, stronger guardrails are not a burden; they are the foundation for confidence.
The real barriers
Adoption in health systems is slow. Fragmented laws, rapid tech shifts, and limited institutional capacity stall progress. Data governance, cybersecurity, and post-market vigilance remain weak spots.
Trust hinges on early detection of harm, workable health technology assessment (HTA) and reimbursement, and clear accountability. Thirstrup flagged the need to protect commercially confidential information while keeping assessments transparent, with human review in the loop to avoid errors that could erode credibility.
Afua van Haasteren called it Europe's "regulatory lasagna": the AI Act, MDR/IVDR, the EHDS, the Data Act, and more. Each is well-intended, but together they form a maze, especially for small and medium-sized enterprises. Diana McGhie warned that compliance costs risk squeezing out smaller innovators.
People and workload
Stefan Eichwalder cautioned that digital tools can backfire: electronic health records have been linked to higher stress and burnout. Still, targeted tools like speech recognition can give clinicians back up to an hour a day.
Marco Marsella underscored the double dividend: prevention and early detection are ripe for AI and can deliver high returns while strengthening European technological sovereignty.
What the EU AI Act actually brings
Sioli emphasized the Act's core: transparency, human oversight, data governance, and post-market monitoring. A single framework across the single market beats a patchwork of rules.
Work is underway to align the Act with medical device regulation so companies face a single path to conformity, whether their device includes AI or not. A digital omnibus is planned to streamline procedures further.
Strammiello warned against ignoring so-called "low-risk" systems: today's minor tool can become tomorrow's patient-facing risk. Education and digital skills are essential if patients and communities are to use AI safely and meaningfully. Eichwalder called for an inclusive rollout, and Virginia Mahieu reminded attendees that the future is uncertain, so health systems must be stress-tested against very different scenarios.
What leaders can do next
- Define a risk register for AI use cases. Classify decision support and diagnostics as higher risk; treat admin tools as lower risk, but review regularly (a minimal register is sketched after this list).
- Stand up a multidisciplinary AI oversight group. Include clinicians, patient voices, data protection, cybersecurity, legal, and operations.
- Tighten data governance. Apply data minimization and purpose limits under GDPR, adopt HL7 FHIR for interoperability, document data lineage, and audit for bias (see the minimization sketch after this list).
- Raise the security bar. Use encryption, access controls, vendor security reviews, and red-teaming. Map controls to ISO 27001 and NIS2. Protect confidential commercial information with strict isolation.
- Make human oversight explicit. Define decision rights, set fail-safes, and require clinician verification for high-impact outputs (see the oversight-gate sketch after this list).
- Monitor in the real world. Track performance drift, false positives/negatives, and incidents; keep audit trails; report issues quickly and visibly (see the monitoring sketch after this list).
- Procure with evidence. Require CE marking where applicable, verify conformity with the AI Act, and budget for local validation. Plan for HTA and reimbursement early.
- Protect the workforce. Co-design workflows to cut clicks, not add them. Measure time saved and stress levels. Fund training and change management.
- Be transparent with patients. Offer plain-language explanations, consent where needed, and opt-out routes. Publish a public registry of AI tools in use.
- Build a safe sandbox. Use synthetic or de-identified data to test, with APIs and clear rollback plans before live deployment (see the synthetic-data sketch after this list).
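To make a few of the items above concrete, the sketches below use Python. First, the risk register: a minimal sketch assuming a simple two-tier classification. The class names, tiers, and example entries are illustrative, not a prescribed schema.

```python
# A minimal sketch of an AI use-case risk register; all names and
# example entries are illustrative, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"  # e.g. decision support, diagnostics
    LOW = "low"    # e.g. administrative tools


@dataclass
class AIUseCase:
    name: str
    owner: str               # accountable clinical or operational lead
    tier: RiskTier
    next_review: date        # even low-risk tools get a review date
    mitigations: list[str] = field(default_factory=list)


# Example entries; in practice these come from an inventory process.
register = [
    AIUseCase(
        name="Sepsis early-warning score",
        owner="Clinical Safety Board",
        tier=RiskTier.HIGH,
        next_review=date(2026, 1, 15),
        mitigations=["clinician sign-off required", "monthly drift report"],
    ),
    AIUseCase(
        name="Discharge-letter speech recognition",
        owner="IT Operations",
        tier=RiskTier.LOW,
        next_review=date(2026, 6, 1),
    ),
]

# Surface anything overdue for review, regardless of tier.
for uc in (u for u in register if u.next_review <= date.today()):
    print(f"Review overdue: {uc.name} ({uc.tier.value} risk)")
```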
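For data minimization under GDPR, one workable pattern is a per-purpose allow-list applied to HL7 FHIR resources. A minimal sketch follows; the purposes and field lists are assumptions for illustration, not a standard.

```python
# A minimal sketch of purpose-based data minimization over an HL7 FHIR
# Patient resource. Only fields each purpose actually needs pass through;
# the purposes and allow-lists here are illustrative assumptions.
ALLOWED_FIELDS = {
    "appointment_reminders": {"resourceType", "id", "name", "telecom"},
    "model_training": {"resourceType", "birthDate", "gender"},  # no direct identifiers
}


def minimize(resource: dict, purpose: str) -> dict:
    """Return a copy of a FHIR resource restricted to the purpose's allow-list."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in resource.items() if k in allowed}


patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Muster", "given": ["Anna"]}],
    "telecom": [{"system": "phone", "value": "+43 1 234567"}],
    "birthDate": "1958-04-12",  # still quasi-identifying; audit residual risk
    "gender": "female",
    "address": [{"city": "Vienna"}],
}

print(minimize(patient, "model_training"))
# {'resourceType': 'Patient', 'birthDate': '1958-04-12', 'gender': 'female'}
```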
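For explicit human oversight, one fail-safe pattern is to hold high-impact outputs until a clinician signs off. A minimal sketch, assuming each output carries an impact level; the levels and the sign-off rule are illustrative.

```python
# A minimal sketch of an oversight gate; impact levels and the
# sign-off rule are illustrative assumptions.
from enum import Enum


class Impact(Enum):
    ADMIN = 1        # e.g. scheduling suggestions
    CLINICAL = 2     # e.g. triage prioritisation
    DIAGNOSTIC = 3   # e.g. findings that could change treatment


def release(output: str, impact: Impact, clinician_signed_off: bool) -> str:
    """Fail safe: hold high-impact outputs until a clinician verifies them."""
    if impact is Impact.DIAGNOSTIC and not clinician_signed_off:
        return f"HELD for clinician review: {output}"
    return f"RELEASED: {output}"


print(release("Possible malignancy flagged on image 12", Impact.DIAGNOSTIC, False))
print(release("Suggest moving appointment to Tuesday", Impact.ADMIN, False))
```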
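For real-world monitoring, rolling false-positive and false-negative rates can be checked against alert thresholds. A minimal sketch, assuming outcomes are labelled after the fact; the window size and thresholds are placeholders a clinical safety team would set.

```python
# A minimal sketch of post-deployment outcome monitoring; window size
# and alert thresholds are illustrative placeholders.
from collections import deque


class OutcomeMonitor:
    def __init__(self, window: int = 500, fp_limit: float = 0.10, fn_limit: float = 0.05):
        self.results = deque(maxlen=window)  # rolling window of recent cases
        self.fp_limit = fp_limit             # alert threshold, false positives
        self.fn_limit = fn_limit             # alert threshold, false negatives

    def record(self, predicted: bool, actual: bool) -> None:
        self.results.append((predicted, actual))

    def check(self) -> list[str]:
        n = len(self.results)
        if n < 50:  # not enough data to judge yet
            return []
        fp = sum(1 for p, a in self.results if p and not a) / n
        fn = sum(1 for p, a in self.results if not p and a) / n
        alerts = []
        if fp > self.fp_limit:
            alerts.append(f"false-positive rate {fp:.1%} exceeds {self.fp_limit:.0%}")
        if fn > self.fn_limit:
            alerts.append(f"false-negative rate {fn:.1%} exceeds {self.fn_limit:.0%}")
        return alerts  # feed these into the incident-reporting process
```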
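Finally, for the sandbox, purely synthetic records avoid any link to real patients during pre-deployment testing. A minimal sketch; the fields and distributions are arbitrary placeholders, not modelled on any real population.

```python
# A minimal sketch of a sandbox data generator; fields and distributions
# are arbitrary placeholders with no link to any real person.
import random

random.seed(42)  # reproducible test runs


def synthetic_patient(i: int) -> dict:
    """Generate one fully synthetic record."""
    return {
        "id": f"synthetic-{i:05d}",
        "age": random.randint(18, 95),
        "sex": random.choice(["female", "male"]),
        "hba1c": round(random.uniform(4.5, 11.0), 1),  # illustrative lab value
    }


cohort = [synthetic_patient(i) for i in range(1000)]
# Point the candidate AI tool at `cohort` behind the sandbox API, verify
# outputs and rollback procedures, then repeat against de-identified data
# before any live deployment.
```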
Helpful references
For details on obligations and timelines, see the European Commission's page on the AI Act. For secondary use of health data and interoperability, review the European Health Data Space.
Upskilling your teams
If you're planning AI pilots or scaling safe deployment, structured learning shortens the curve. Explore curated options here: AI courses by job role.
Bottom line
AI can help Europe's health systems deliver earlier, safer, and more equitable care-but only with guardrails, clarity, and trust. The Act sets a floor; leaders must build the practice around it.