EMA and FDA issue joint AI guiding principles for drug developers
On 14 January 2026, the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) published a common set of 10 guiding principles for using AI across the drug product lifecycle. The message to sponsors is clear: use AI, but take a risk-based, lifecycle approach and align with current standards.
The agencies point to AI's potential to speed development, improve pharmacovigilance, and cut animal testing by better predicting toxicity and efficacy in humans. As they put it, these principles are meant to "inform, enhance, and promote the use of AI for generating evidence across all phases of the drug product life cycle."
Why this matters for product development teams
- Sets clear expectations for AI in R&D, clinical development, CMC, and postmarket surveillance.
- Elevates governance: documentation, validation, and oversight need to match the risk and context of use.
- Signals convergence across jurisdictions, reducing guesswork for global programs.
- Backs a practical outcome: better decision quality, fewer delays, and less reliance on animal models.
The 10 principles, translated into practice
- Human-centric and ethical design: Define the human benefit, set guardrails for fairness, and document who the model serves and who it could impact.
- Clear context of use: State precisely where AI is used (e.g., endpoint selection, site feasibility, dose finding, signal detection) and what decisions it informs.
- Risk-based approach: Calibrate validation depth, risk controls, and human oversight to the potential patient, trial, or quality impact.
- Data governance and documentation: Track lineage, consent, quality checks, and access; keep model cards and data sheets up to date (a minimal model card sketch follows this list).
- Standards and compliance: Align with current legal, scientific, and regulatory standards; adopt consensus technical standards as they mature.
- Multidisciplinary development: Involve clinical, biostatistics, data science, quality, pharmacovigilance, and software engineering, end to end.
- Sound model and system engineering: Use proven ML/SE practices: versioning, reproducibility, testing, security, and change control.
- Risk-based performance assessment: Validate for the stated context; stress-test for bias, drift, and failure modes; define acceptance criteria up front.
- Lifecycle management: Monitor in production, retrain with controls, revalidate after material changes, and retire models responsibly.
- Plain-language transparency: Provide essential information in clear terms to investigators, reviewers, and end users; no black boxes.
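Several of these principles (clear context of use, data governance, transparency) come down to keeping one structured record per model. Here is a minimal sketch of what such a model card might capture, assuming a simple in-house Python dataclass; the `ModelCard` type and its field names are illustrative, not a schema the agencies prescribe:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    context_of_use: str            # where the model is used and what decisions it informs
    intended_users: list[str]      # e.g., clinical operations, safety reviewers
    data_lineage: str              # sources, consent basis, quality checks
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = "required"  # human-in-the-loop expectation

# Hypothetical record for a trial-site ranking model.
card = ModelCard(
    name="site-feasibility-ranker",
    version="1.2.0",
    context_of_use="Ranks candidate trial sites; informs, but does not make, site selection.",
    intended_users=["clinical operations"],
    data_lineage="De-identified historical site performance data; QC'd and access-logged.",
    known_limitations=["Trained on oncology trials only"],
)
print(f"{card.name} v{card.version}: {card.context_of_use}")
```

Keeping the card in code or a versioned file means it can travel with the model artifact through change control rather than drifting out of date in a document repository.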
What to do next (practical 90-day plan)
- Inventory AI use: Catalog current and proposed AI use cases by domain (preclinical, clinical, CMC, safety) and classify by risk and GxP relevance.
- Assign ownership: Name an accountable AI product owner and a cross-functional review board with quality and clinical representation.
- Define context of use: For each model, document objective, decision impact, users, inputs/outputs, constraints, and human-in-the-loop points.
- Set documentation baselines: Create model cards, data sheets, validation plans, and monitoring plans; standardize templates.
- Map to standards: Align controls to applicable regulations and consensus standards; note gaps and remediation timelines.
- Validation and monitoring: Establish acceptance criteria, bias and drift checks, stability testing, and ongoing performance dashboards (see the drift-check sketch after this plan).
- Change control: Implement versioning, approval gates for retraining, and triggers for revalidation (see the versioning sketch after this plan).
- Vendor oversight: Require transparency, auditability, and update cadence from AI vendors; include right-to-audit clauses.
- Security and privacy: Protect datasets, model artifacts, and prompts; minimize PII and apply least-privilege access.
- Training and communication: Give teams plain-language guidance on AI use, limits, and escalation paths.
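To make the monitoring item concrete: below is a minimal sketch of a batch drift check, assuming scored features arrive in batches and using a two-sample Kolmogorov-Smirnov test as the drift statistic. The threshold and the synthetic data are illustrative, not regulatory acceptance criteria.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """True if the live feature distribution differs from the reference one."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Synthetic stand-ins: the validation-time distribution vs. a shifted live batch.
rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)

if drifted(reference, live):
    print("Drift detected: flag for the review board and check revalidation triggers.")
```

In practice you would run one check per monitored feature and log results to the performance dashboard rather than printing.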
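For the change-control item, one lightweight pattern is to fingerprint model artifacts so that any retrain yields a new hash that must be approved before deployment. This is a sketch under that assumption; the approved-hash set stands in for whatever quality workflow your organization actually uses.

```python
import hashlib
from pathlib import Path

def artifact_fingerprint(path: Path) -> str:
    """Content hash of a model artifact; any retrain changes the hash."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def deploy_if_approved(artifact: Path, approved_hashes: set[str]) -> bool:
    """Gate deployment on a prior, recorded approval of this exact artifact."""
    fingerprint = artifact_fingerprint(artifact)
    if fingerprint not in approved_hashes:
        print(f"Blocked: {artifact.name} ({fingerprint[:12]}...) has no recorded approval.")
        return False
    print(f"Deploying approved artifact {artifact.name}.")
    return True
```

The same fingerprint can key the model card, so documentation, approval, and deployment all refer to the identical artifact.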
Alignment with prior efforts
The principles echo good machine learning practice themes seen across international regulators and standards groups. Expect continuing collaboration on research, education, and consensus standards to translate these principles into more detailed guidance and review expectations.
Bottom line
AI in drug development is welcome, provided it is governed. Treat every AI use case like a product: define context, right-size risk controls, document decisions, and manage it across its lifecycle. Teams that operationalize these basics will move faster with fewer surprises in review.
If your team is building skills for AI governance and validation, explore role-based learning paths here: AI courses by job.