Europe and US agree on joint AI guidance for medicines development
The European Medicines Agency (EMA) and the U.S. FDA have agreed on ten shared principles for using AI across the medicine lifecycle. The goal is clear: accelerate the development of safe, effective therapies, reduce shortages, and lower costs without compromising patient or animal safety.
AI is now embedded in discovery, trials, manufacturing, and post-market safety. This framework gives regulators and teams a common language to validate models, manage risk, and keep humans in control.
European Commissioner for Health and Animal Welfare, Olivér Várhelyi, noted that these principles are a first step in renewed EU-US cooperation on novel medical technologies, aiming to keep innovation competitive while protecting patients.
The ten principles at a glance
- 1) Human-centric and ethical: AI must respect human values and patient welfare.
- 2) Risk-based use: Validation, mitigation, and oversight scale with context-of-use and model risk.
- 3) Standards and GxP: Follow legal, ethical, scientific, technical, cybersecurity, and GxP requirements.
- 4) Clear context-of-use: Define the role, scope, and decision boundaries of each AI application.
- 5) Multidisciplinary teams: Integrate domain experts and AI/ML specialists throughout the lifecycle.
- 6) Governance and privacy: Protect sensitive data and enforce strong access, lineage, and audit controls.
- 7) Good model and system development: Prioritize transparency, reliability, generalizability, and technical stability to protect patients.
- 8) System-level performance: Assess the full workflow, including human-AI interaction, with fit-for-use data and metrics.
- 9) Ongoing monitoring: Schedule re-evaluation to manage drift, degradation modes, and real-world shifts.
- 10) Plain-language communication: Provide accessible information on purpose, performance, limits, data, updates, and explainability.
Why this matters
The principles cover evidence generation and monitoring end-to-end: discovery, trials, manufacturing, release, and pharmacovigilance. Expect them to anchor formal guidance, align global practices, and support controlled testing of new AI methods in regulatory settings.
This builds on the EMA's prior work and the EU pharmaceutical legislation that opens the door to AI in regulatory decision-making and structured sandboxes.
What changes for R&D, data science, and engineering teams
- Context and risk first: Classify AI use cases by impact (e.g., decision support vs. automated control) and match validation depth to risk (see the risk-tier sketch after this list).
- Documentation by design: Maintain model cards, data sheets, and protocol histories that map to GxP and inspection needs.
- Data governance: Track provenance, consent, bias controls, and transformations. Secure PII/PHI and sensitive manufacturing data. (A lineage-record sketch follows this list.)
- MLOps you can defend: Version everything, implement drift detection, set retraining triggers, and log human-AI decisions (a decision-log sketch follows this list).
- Human oversight: Define who can override the system, under what conditions, and how those actions are recorded.
- Vendor and model risk: Assess third-party tools (security, validation evidence, support for audits) before integration.
- Plain-language outputs: Prepare user- and patient-facing summaries that explain purpose, limits, and updates.
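To make the "context and risk first" item concrete, here is a minimal sketch of how a team might encode context-of-use and derive a risk tier that drives validation depth. The tier names, fields, and mapping rules are illustrative assumptions for this post, not part of the joint EMA-FDA principles.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers; real programmes should define their own scale."""
    LOW = "decision support, human always decides"
    MEDIUM = "human-in-the-loop with material patient or product impact"
    HIGH = "automated control or direct impact on release or safety decisions"


@dataclass
class AIUseCase:
    name: str
    context_of_use: str   # role, scope, decision boundaries (principle 4)
    autonomy: str         # "advisory" or "automated"
    patient_impact: bool  # could an error reach a patient or a batch release?


def classify(use_case: AIUseCase) -> RiskTier:
    """Map a use case to a risk tier that scales validation and oversight (principle 2)."""
    if use_case.autonomy == "automated" and use_case.patient_impact:
        return RiskTier.HIGH
    if use_case.patient_impact:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: an advisory model ranking adverse-event reports for human review.
pv_triage = AIUseCase(
    name="PV signal triage",
    context_of_use="rank case reports for pharmacovigilance review; humans decide",
    autonomy="advisory",
    patient_impact=True,
)
print(classify(pv_triage))  # RiskTier.MEDIUM -> deeper validation than low-risk tooling
```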
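The data governance item can be made auditable by recording provenance alongside every transformation. The record shape below is an assumption for illustration; adapt the fields to your own catalogue or lineage tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LineageRecord:
    """One step in a dataset's history: where it came from and what was done to it."""
    dataset_id: str
    source: str            # upstream system or prior dataset version
    transformation: str    # what was applied (e.g., de-identification, imputation)
    consent_basis: str     # legal or consent basis for this use of the data
    performed_by: str      # accountable person or service account
    performed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


step = LineageRecord(
    dataset_id="trial-ae-events-v3",
    source="trial-ae-events-v2",
    transformation="de-identified free-text narratives; dropped direct identifiers",
    consent_basis="trial informed consent, secondary-use clause",
    performed_by="svc-data-curation",
)
print(step)
```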
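For the MLOps and human-oversight items, a sketch of an append-only decision log that captures what the model recommended, what the human did, and why an override happened. The field names and JSON Lines format are assumptions; the point is that every human-AI decision leaves an auditable trace without storing raw PII.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_decision(path: str, *, model_id: str, model_version: str,
                 input_payload: dict, model_output: str,
                 human_decision: str, overridden: bool, rationale: str) -> None:
    """Append one auditable record of an AI recommendation and the human action taken."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash rather than store the raw input when it may contain PII/PHI.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "model_output": model_output,
        "human_decision": human_decision,
        "overridden": overridden,
        "override_rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_decision(
    "decisions.jsonl",
    model_id="batch-release-anomaly-detector",
    model_version="1.4.2",
    input_payload={"batch": "B-1042", "sensor_summary": "..."},
    model_output="flag for review",
    human_decision="released after manual inspection",
    overridden=True,
    rationale="sensor drift confirmed as calibration artefact",
)
```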
Practical steps to start this quarter
- Inventory every AI use case and define its context-of-use and model risk.
- Draft validation plans tied to risk: data quality, bias checks, performance thresholds, and stress tests (see the validation-gate sketch after this list).
- Stand up monitoring: metrics, alerting, change control, and periodic re-approval gates (see the drift-check sketch after this list).
- Implement audit-ready data lineage and access controls for sensitive datasets.
- Create a cross-functional review board (clinical, stats, QA, manufacturing, safety, security, AI/ML).
- Write the plain-language documentation now; don't leave it to the end.
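A validation plan tied to risk can be expressed as data, so thresholds are reviewable by QA and the release gate is mechanical. The metrics and limits below are placeholders to illustrate the shape of such a gate, not recommended values.

```python
# Illustrative validation gate: thresholds come from the risk-based validation plan.
validation_plan = {
    "min_auroc": 0.85,               # performance threshold
    "max_missing_rate": 0.02,        # data quality check
    "max_subgroup_auroc_gap": 0.05,  # simple bias check across predefined subgroups
}

observed = {
    "auroc": 0.88,
    "missing_rate": 0.01,
    "subgroup_auroc_gap": 0.07,
}

failures = []
if observed["auroc"] < validation_plan["min_auroc"]:
    failures.append("overall performance below threshold")
if observed["missing_rate"] > validation_plan["max_missing_rate"]:
    failures.append("data quality: missing rate too high")
if observed["subgroup_auroc_gap"] > validation_plan["max_subgroup_auroc_gap"]:
    failures.append("bias check: subgroup performance gap too large")

print("PASS" if not failures else f"FAIL: {failures}")
```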
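To make the monitoring step concrete, here is a minimal drift check using the Population Stability Index (PSI) over model scores, with an alert threshold that routes the model to a re-approval gate. The 0.2 threshold is a common rule of thumb, and the data and names are synthetic assumptions; set your own limits in the validation plan.

```python
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, avoiding zeros that would break the log.
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


rng = np.random.default_rng(0)
reference_scores = rng.normal(0.0, 1.0, 5_000)  # scores at validation time
current_scores = rng.normal(0.6, 1.2, 5_000)    # scores observed in production

drift = psi(reference_scores, current_scores)
if drift > 0.2:  # rule-of-thumb alert threshold; set per validation plan
    print(f"PSI={drift:.2f}: significant drift, route model to re-approval gate")
else:
    print(f"PSI={drift:.2f}: within tolerance, continue routine monitoring")
```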
Policy signals and what to watch
Expect more detailed guidance that maps these principles to concrete submission expectations. Watch for sandbox programs and updates within the EMA network strategy to 2028 focused on data, digitalisation, and AI.
Background reading worth bookmarking:
- EMA reflection paper on AI in the medicinal product lifecycle
- FDA: Artificial Intelligence in drug development
For teams building compliant AI
If your organization is standing up AI in discovery, trials, or manufacturing, align skills with these expectations. A focused learning path by role can speed that process.
Explore AI courses by job for engineers, data scientists, analysts, and product teams working in regulated environments.