EU AI Compliance For Medtech: Move From Reactive To Lifecycle Management
AI isn't an add-on anymore. In the EU it is regulated, whether as a medical device in its own right or as a safety component of one, and it demands ongoing accountability across the product lifecycle.
At a recent industry event in Rotterdam, Fernanda Ferraroli Paro laid it out clearly: shift from reactive compliance to a proactive, lifecycle approach. That means building risk management, postmarket vigilance, and continuous improvement into everyday operations, not treating them as a checkbox before launch.
The shift leadership needs to drive
- Make lifecycle management the default: design, development, launch, and postmarket as one continuous loop.
- Stand up cross-functional governance for AI (regulatory, clinical, quality, security, data science, product).
- Fund postmarket monitoring: real-world performance, incident tracking, data drift, and model updates (a drift-check sketch follows this list).
- Tie KPIs to safety, performance, equity, and explainability, then review them at the executive level.
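To make the monitoring bullet concrete, here is a minimal sketch of one common drift check, the population stability index (PSI), comparing live feature data against the training-time baseline. The bucket count, the 0.2 alert threshold, and the `population_stability_index` helper name are illustrative assumptions, not regulatory values.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               buckets: int = 10) -> float:
    """PSI between training-time data and live production data.

    Rule-of-thumb reading (a convention, not a regulatory threshold):
    < 0.1 stable, 0.1-0.2 investigate, > 0.2 significant drift.
    """
    # Bucket edges come from the baseline so both samples share bins.
    edges = np.percentile(baseline, np.linspace(0, 100, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)

    # Floor the proportions to avoid log(0) in sparse buckets.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)

    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)    # training distribution
    production = rng.normal(0.3, 1.1, 10_000)  # shifted live data
    psi = population_stability_index(baseline, production)
    print(f"PSI = {psi:.3f} -> {'drift alert' if psi > 0.2 else 'stable'}")
```

PSI is only one signal; a funded postmarket program pairs drift metrics with clinical performance indicators and incident data.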
The legal frame you must align to
- EU AI Act: a risk-based model with four levels (minimal, limited, high, unacceptable). Most healthcare AI will be high-risk because of its impact on diagnosis, treatment, or monitoring.
- Any AI that is a medical device under MDR/IVDR, or a safety component of one, and that requires notified body conformity assessment is automatically high-risk (see the triage sketch after this list).
- Integrated conformity assessment: notified bodies can assess MDR/IVDR and AI Act requirements in a single process to avoid duplication.
- Certification isn't "once and done." Ongoing postmarket monitoring is required.
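As a first-pass illustration of the classification rule above, here is a toy triage sketch. The `AIFunction` fields and the `classify` helper are hypothetical, and the fallback to LIMITED is a placeholder only: real risk classification needs a full legal analysis, not code.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AIFunction:
    name: str
    is_medical_device: bool    # qualifies as a device under MDR/IVDR
    is_safety_component: bool  # safety component of such a device
    needs_notified_body: bool  # third-party conformity assessment required

def classify(fn: AIFunction) -> RiskLevel:
    """Encode the rule above: MDR/IVDR AI that requires notified body
    assessment is high-risk under the AI Act. The fallback is a
    placeholder; confirm every classification with regulatory counsel."""
    if (fn.is_medical_device or fn.is_safety_component) and fn.needs_notified_body:
        return RiskLevel.HIGH
    return RiskLevel.LIMITED  # placeholder, not a legal conclusion
```

A table of such records, one per AI function, is also a natural starting point for the classification step in the 90-day plan below.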
What high-risk AI must demonstrate
- Data governance: quality, representativeness, and bias mitigation across training, validation, and testing.
- Transparency: explainability, clear labeling, documented purpose, limitations, and level of automation.
- Human oversight: defined control points, override paths, and responsibilities.
- Technical soundness: resilience against failures and attacks, plus accuracy that holds up over time.
Documentation you can't skip
- Intended purpose, risk classification, and justification.
- Training and testing approach, data specifications, and performance claims.
- Human oversight plan and risk mitigation strategies.
- Up-to-date technical documentation available to notified bodies and regulators.
Traceability and auditability
MDR, IVDR, and the AI Act expect traceability across the lifecycle. Use UDI (Unique Device Identification) systems and maintain audit trails for software updates and algorithm modifications.
Include an AI system log to support postmarket evaluations. Expect regulators to review how you track changes, monitor performance, and handle incidents.
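One way to make such a log tamper-evident is a hash chain, where each entry commits to the previous one so any later edit breaks the chain. This is a minimal sketch assuming JSON records; field names such as `udi_di` are illustrative, and a production system would add signatures, retention controls, and secure storage.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_log_entry(log: list, udi_di: str, event: str, detail: dict) -> dict:
    """Append a tamper-evident record: each entry carries the SHA-256
    of the previous one, so any later modification breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "udi_di": udi_di,  # device identifier the change applies to
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,    # e.g. "model_update", "incident", "rollback"
        "detail": detail,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash so an auditor can confirm the trail is intact."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if e["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True

log: list = []
append_log_entry(log, "UDI-DI-0001", "model_update",
                 {"version": "2.1.0", "change": "retrained on Q3 data"})
assert verify_chain(log)
```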
Transparency in practice
- Clear labeling and user information covering purpose, limitations, and automation level.
- Safety and performance data presented in line with MDR/IVDR requirements.
- Plan for EUDAMED disclosures as the database phases in.
Cybersecurity is now core to compliance
Security is part of the risk management fabric under MDR/IVDR and reinforced by NIS 2 and the Cyber Resilience Act. It's expected across design, development, deployment, maintenance, and decommissioning, not added at the end.
- Embed security controls across the lifecycle and document how you manage cyber risk.
- Harden the supply chain: vendor security, SBOMs (software bills of materials), secure updates, and third-party risk management (see the integrity-check sketch after this list).
- Maintain incident detection and response, business continuity, and crisis management plans.
- Train staff on vulnerability management, incident reporting, and secure practices.
- Run a coordinated vulnerability disclosure process; even where not mandatory, it's recommended.
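As one concrete slice of the secure-update bullet, here is a minimal integrity check that assumes the expected digest arrives via a separately authenticated release manifest. `verify_update` is a hypothetical helper, and a production pipeline would verify a cryptographic signature on the manifest as well.

```python
import hashlib
from pathlib import Path

def verify_update(artifact: Path, expected_sha256: str) -> bool:
    """Compare the update artifact's SHA-256 against the digest pinned
    in the release manifest; reject the update on any mismatch."""
    digest = hashlib.sha256()
    with artifact.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Illustrative usage: pinned_digest would come from the signed manifest.
# if not verify_update(Path("model-v2.1.bin"), pinned_digest):
#     raise RuntimeError("update rejected: integrity check failed")
```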
Data protection is foundational
GDPR, the European Health Data Space regulation, and the AI Act all pull in data protection principles. Expect strict consent requirements, purpose limitation, data minimization, and clear governance for secondary use of health data.
Your 90-day action plan
- Classify every AI function: intended purpose, risk class, and whether it's a device or safety component.
- Run a gap assessment against MDR/IVDR and AI Act requirements; log actions, owners, and timelines.
- Stand up an AI risk and quality framework: policy, roles, review boards, and change control.
- Build a postmarket plan: data sources, monitoring cadence, performance thresholds, and triggers for action (see the threshold sketch after this list).
- Implement an AI system log and software update audit trail tied to UDI.
- Close cybersecurity gaps against NIS 2 and CRA expectations; test incident response.
- Tighten documentation: intended use, training/testing evidence, bias checks, oversight, and labeling.
- Train product, clinical, and support teams on transparency, safety, and incident workflows.
- Prep for integrated conformity assessment with your notified body.
- Plan for EUDAMED milestones and public-facing transparency materials.
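To make thresholds and triggers concrete, here is a minimal sketch of the evaluation step of such a postmarket plan. Every metric name, floor, and action below is an illustrative assumption; real values must come from your clinical evaluation and risk analysis.

```python
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str   # monitored indicator from the postmarket plan
    floor: float  # minimum acceptable value
    action: str   # what the plan says to do on a breach

# Illustrative values only; derive real ones from clinical evidence.
THRESHOLDS = [
    Threshold("sensitivity", 0.90, "pause rollout, open CAPA"),
    Threshold("specificity", 0.85, "investigate, notify quality board"),
    Threshold("auroc", 0.88, "schedule retraining assessment"),
]

def evaluate(metrics: dict) -> list:
    """Return the actions triggered by this monitoring cycle's metrics."""
    triggered = []
    for t in THRESHOLDS:
        value = metrics.get(t.metric)
        if value is not None and value < t.floor:
            triggered.append(f"{t.metric}={value:.3f} < {t.floor}: {t.action}")
    return triggered

if __name__ == "__main__":
    cycle = {"sensitivity": 0.87, "specificity": 0.91, "auroc": 0.90}
    for action in evaluate(cycle):
        print(action)  # -> sensitivity=0.870 < 0.9: pause rollout, open CAPA
```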
What Fernanda Ferraroli Paro wants leaders to hear
Compliance is lifecycle work. Build explainability, accountability, traceability, and security into how your teams operate. Fund the systems and people to keep AI safe and effective after launch, because that's what the law now expects.
Upskill your teams
If you're building internal capability for AI literacy, governance, and safe deployment, explore role-based programs here: Complete AI Training: Courses by job.