EMA and FDA issue joint AI guidance for medicine development
Two major regulators just moved in the same direction. The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) proposed ten principles for good AI practice to support evidence generation and monitoring across the medicine lifecycle.
For engineering and data teams in pharma, this sets a shared baseline across the Atlantic. Expect firmer requirements for how AI is built, validated, and monitored in production, without stalling useful innovation.
Why this matters for IT and development
Olivér Várhelyi, the EU Commissioner for Health and Animal Welfare, said: "The guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation in the field of novel medical technologies. The principles are a good showcase of how we can work together on the two sides of the Atlantic to preserve our leading role in the global innovation race, while ensuring the highest level of patient safety."
The principles will inform upcoming EU AI guidance and build on the EMA's 2024 AI reflection paper. The FDA also issued its first guidance on using AI for drug development in 2025, signaling clear momentum on both sides.
New EU initiatives, such as the proposed Biotech Act and the updated pharmaceutical legislation, make space for AI throughout the medicine lifecycle, including in regulatory decision-making.
Using AI for more efficient clinical trials
The message is simple: use AI to speed up research and operations, but prove it's safe, fair, and reliable. That means documented evidence for model performance, continuous monitoring in the wild, and clear human oversight where outcomes affect patients.
What to expect in the 10 principles
The full list isn't published here, but you can anticipate emphasis on the following themes common to regulated AI:
- High-quality data, provenance, and traceability
- Clear problem definition and intended use
- Model validation, verification, and reproducibility
- Continuous performance monitoring and drift detection
- Bias assessment across relevant subgroups
- Security, privacy, and access control for sensitive data
- Transparency, documentation, and auditability
- Human oversight and clear accountability
Practical steps to start now
- Treat AI as GxP software: validation protocols, change control, and release management.
- Build data lineage end-to-end: versioned datasets, feature stores, and audit logs for every transformation (see the lineage-log sketch after this list).
- Adopt regulated MLOps: reproducible training runs, environment capture, artifact registries, and controlled deployments (a run-manifest sketch follows below).
- Document intent and risk: problem statements, risk tiering, model cards, and decision impact analysis.
- Measure what matters: clinical relevance, subgroup performance, calibration, uncertainty, and error budgets (see the subgroup-metrics sketch below).
- Monitor in production: drift, data quality, alerting, periodic reports, and retraining criteria tied to governance gates (see the drift-check sketch below).
- Bias and safety reviews: pre-specify metrics, mitigation plans, and thresholds appropriate to the indication.
- Security-first pipelines: PHI controls, encryption, RBAC, dependency scanning, and vendor risk reviews.
- Human-in-the-loop by design: escalation paths, overrides, and clear documentation of who is accountable for final decisions.
- Submission-ready documentation: SOPs, validation summaries, test evidence, and links to immutable audit trails.
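To make the lineage item concrete, here is a minimal sketch of an append-only audit log that content-hashes each dataset version. The `lineage_audit.jsonl` file name and the example paths are hypothetical; a regulated pipeline would back this with an immutable store rather than a local file.

```python
# Minimal lineage log: hash each dataset version and record every
# transformation as an append-only JSON-lines audit entry.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("lineage_audit.jsonl")  # hypothetical log location

def file_sha256(path: Path) -> str:
    """Content hash, so any silent change to the data is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_transformation(input_path: Path, output_path: Path, step: str) -> None:
    """Append one record per transformation step: what ran, on what, producing what."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "input": {"path": str(input_path), "sha256": file_sha256(input_path)},
        "output": {"path": str(output_path), "sha256": file_sha256(output_path)},
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example (hypothetical files): record that raw data was cleaned into a training set.
# log_transformation(Path("raw.csv"), Path("train.csv"), "drop-missing-labels")
```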
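For the reproducibility item, one low-effort starting point is a run manifest captured at training time. This sketch assumes the run happens inside a git checkout with git on the PATH; the `run_manifest.json` name is illustrative, and an artifact registry would normally hold the result.

```python
# Capture the environment alongside each training run so results can be
# reproduced and audited later.
import json
import platform
import random
import subprocess
import sys
from importlib import metadata

def capture_run_manifest(seed: int) -> dict:
    random.seed(seed)  # fix randomness; do the same for numpy/torch if used
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "seed": seed,
        # Assumes git is available and the code is under version control.
        "git_commit": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        # Snapshot of every installed package and its version.
        "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
    }

with open("run_manifest.json", "w") as f:
    json.dump(capture_run_manifest(seed=42), f, indent=2)
```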
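For subgroup performance and calibration, the sketch below uses scikit-learn's `roc_auc_score` (discrimination) and `brier_score_loss` (calibration) on synthetic data. The `site_A`/`site_B` strata are hypothetical stand-ins for the pre-specified subgroups your governance plan defines.

```python
# Report performance per subgroup, not just overall.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                              # binary outcomes
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 1000), 0, 1)   # synthetic model scores
subgroup = rng.choice(["site_A", "site_B"], size=1000)              # hypothetical strata

for group in np.unique(subgroup):
    mask = subgroup == group
    print(
        f"{group}: AUC={roc_auc_score(y_true[mask], y_prob[mask]):.3f}, "
        f"Brier={brier_score_loss(y_true[mask], y_prob[mask]):.3f}"
    )
```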
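For drift monitoring, the Population Stability Index (PSI) is a common first check on a single feature. This is a minimal NumPy sketch with simulated data; the 0.2 alert threshold in the comment is a conventional rule of thumb, not a regulatory requirement.

```python
# Population Stability Index: compare a production feature's distribution
# against the training baseline.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Higher PSI means the live distribution has drifted further from training."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, with a small floor to avoid log(0).
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(1)
train_feature = rng.normal(0, 1, 10_000)
live_feature = rng.normal(0.3, 1.2, 10_000)  # simulated shifted production data
print(f"PSI: {psi(train_feature, live_feature):.3f}")  # > 0.2 typically warrants review
```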
EU strategy puts safety and scale side by side
In October, the European Commission introduced its AI in Science Strategy, including a virtual institute, the Resource for AI Science in Europe (RAISE), and increased use of AI models and agentic systems in pharma. Commission President Ursula von der Leyen said: "AI adoption needs to be widespread, and with these strategies, we will help speed up the process. Putting AI first also means putting safety first. We will drive this 'AI first' mindset across all our key sectors, from robotics to healthcare, energy and automotive."
Source documents and next steps
For context on current expectations, see the EMA's AI resources and the FDA's AI/ML guidance for drug development:
- EMA: Artificial Intelligence in the medicinal product lifecycle
- FDA: AI/ML-enabled drug and biological product development
If your team needs structured upskilling on MLOps, validation, and AI safety, explore curated developer tracks here: Complete AI Training - Courses by Job.