FDA and EMA release a joint AI framework for drug development: what IT and dev teams need to know
US and European regulators have published joint guidance to set a common standard for how AI is built and used across the drug development lifecycle. The goal: raise quality, keep humans in the loop, and meet ethical, legal, scientific, regulatory, and cybersecurity expectations from day one.
The document outlines key principles meant to guide long-term adoption. For teams building, validating, and integrating AI into GxP workflows, this reads like a blueprint for responsible MLOps in pharma.
What's in the guidance
- Quality-first, human-centric, compliant by design: Models should meet clear scientific and regulatory expectations and keep human oversight front and center.
- Data lineage and traceability: Maintain detailed records of data sources, preprocessing, labeling, and transformations to support GxP compliance and audits (a minimal lineage sketch follows this list).
- Defined role and scope: Specify where AI is used, how it aids decisions, and who is accountable. Don't let models "quietly" expand their use.
- Fit-for-use data and clear outputs: Ensure inputs match context-of-use and present outputs that are digestible, accessible, and relevant to end users.
- Risk-based validation: Evaluate the full system, including human-AI interactions, using metrics tied to the intended context. Test on representative, shift-aware datasets.
- Lifecycle monitoring: Reassess performance on a regular cadence. Track drift, bias, and failure modes. Troubleshoot with documented change control.
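To make the lineage principle concrete, here is a minimal sketch of an append-only provenance log in Python. Everything here is illustrative: the `LineageRecord` schema, the file names, and the choice of SHA-256 fingerprints are assumptions, not anything the guidance prescribes.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class LineageRecord:
    """One auditable step in a dataset's history (hypothetical schema)."""
    dataset: str      # logical dataset name
    step: str         # e.g. "ingest", "deduplicate", "label"
    source: str       # upstream file, table, or record id
    sha256: str       # content fingerprint, so silent edits are detectable
    recorded_at: str  # UTC timestamp

def log_step(log: Path, record: LineageRecord) -> None:
    """Append one JSON line per step; never rewrite earlier entries."""
    with log.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

# Toy input so the sketch runs standalone.
raw = Path("adverse_events_raw.csv")
raw.write_text("case_id,term\n1,headache\n", encoding="utf-8")

log_step(Path("lineage.jsonl"), LineageRecord(
    dataset="adverse_events",
    step="ingest",
    source=str(raw),
    sha256=hashlib.sha256(raw.read_bytes()).hexdigest(),
    recorded_at=datetime.now(timezone.utc).isoformat(),
))
```

An append-only JSON-lines log is deliberately boring: auditors can replay it top to bottom, and nothing in the pipeline can rewrite history without leaving a trace.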
The agencies' drug and biologics teams contributed to the guidance to help industry capture AI's upside while protecting patients and product quality.
Why this matters for IT and development
If you build or integrate AI in regulated environments, this sets the bar for engineering, documentation, and oversight. It's also a practical checklist for teams modernizing pipelines without creating audit risk.
- Scope first: Write a short "context-of-use" for every AI feature. Define inputs, outputs, users, guardrails, and boundaries. Assign an owner (context-of-use sketch after this list).
- Data governance: Keep a data inventory with lineage, consent, and retention rules. Version datasets and transformations like code. Lock down PII.
- Reproducibility: Containerize training and inference. Version models, weights, features, prompts, and configs. Capture seeds and environment details (seed-capture sketch below).
- Documentation: Ship model cards and data sheets. Include known limitations, monitoring plans, and rollback criteria. Keep an audit trail (model card sketch below).
- Validation: Define acceptance criteria tied to clinical or operational impact. Test across edge cases, distribution shifts, and human-in-the-loop scenarios (acceptance-test sketch below).
- Risk management: Run FMEA or similar. Track bias, outliers, and failure triggers. Predefine rollback and CAPA paths.
- MLOps with gates: Add approvals to your CI/CD for models. Use feature stores, lineage tags, and environment promotion rules.
- Monitoring: Build dashboards for drift, calibration, and decision quality. Set alerts, playbooks, and revalidation schedules (drift-check sketch below).
- Security and privacy: Threat-model data pipelines and inference endpoints. Enforce least privilege, key management, and dependency vetting.
- Human factors: Make outputs explainable at the right level for users. Clarify confidence, uncertainty, and next steps.
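First, the scope item: a context-of-use declaration works well as a small typed record that refuses to ship half-empty. This is a hypothetical schema, not a regulatory template; the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextOfUse:
    """Hypothetical context-of-use declaration for one AI feature."""
    feature: str             # what the model does
    inputs: list[str]        # data the model may see
    outputs: list[str]       # what it produces
    users: list[str]         # who consumes the output
    guardrails: list[str]    # hard limits on behavior
    out_of_scope: list[str]  # uses that are explicitly forbidden
    owner: str               # the accountable human

    def validate(self) -> None:
        """Refuse blank declarations: every field must be filled deliberately."""
        for name, value in vars(self).items():
            if not value:
                raise ValueError(f"context-of-use field '{name}' is empty")

cou = ContextOfUse(
    feature="flag likely protocol deviations in site monitoring notes",
    inputs=["site monitoring notes"],
    outputs=["deviation flag with a short rationale"],
    users=["clinical QA reviewers"],
    guardrails=["never auto-closes a finding"],
    out_of_scope=["patient-level safety decisions"],
    owner="qa-lead@example.com",
)
cou.validate()
```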
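For reproducibility, a sketch that seeds the standard-library RNG and writes a run manifest. It covers only Python's built-in `random`; if you use NumPy or a deep learning framework, seed those too. The file and field names are assumptions.

```python
import json
import os
import platform
import random
import sys

def set_and_record_seed(seed: int, manifest_path: str = "run_manifest.json") -> None:
    """Seed the stdlib RNG and write the details needed to rerun this job."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)  # only affects child processes
    manifest = {
        "seed": seed,
        "python": sys.version,
        "platform": platform.platform(),
        "argv": sys.argv,
        # add package versions, container digest, and GPU details here
    }
    with open(manifest_path, "w", encoding="utf-8") as fh:
        json.dump(manifest, fh, indent=2)

set_and_record_seed(42)
```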
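For documentation, a machine-readable model card that fails fast when a required section is missing. The required field set mirrors the bullet above (limitations, monitoring plan, rollback criteria) but is otherwise an assumption, as is the example model.

```python
import json
from datetime import datetime, timezone

REQUIRED = {"model", "version", "intended_use", "limitations",
            "monitoring_plan", "rollback_criteria"}

def write_model_card(path: str, **fields: object) -> None:
    """Emit a machine-readable card; render human-readable views from it."""
    missing = REQUIRED - fields.keys()
    if missing:
        raise ValueError(f"model card is missing fields: {sorted(missing)}")
    fields["generated_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(fields, fh, indent=2)

write_model_card(
    "model_card.json",
    model="ae-triage-classifier",  # hypothetical model name
    version="1.4.0",
    intended_use="rank adverse-event reports for human review",
    limitations=["English-language reports only", "not validated for pediatrics"],
    monitoring_plan="weekly drift check against a frozen reference window",
    rollback_criteria="precision below 0.85 on the audit set",
)
```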
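For validation, acceptance criteria are easiest to enforce as tests that gate release. A minimal pytest-style example, assuming sensitivity is the metric your context-of-use cares about and 0.75 is your documented floor; both are placeholders.

```python
# test_acceptance.py -- run with pytest; metric and threshold are placeholders
def sensitivity(preds: list[int], labels: list[int]) -> float:
    """True-positive rate: missed positives are the costly failure mode here."""
    true_pos = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    return true_pos / max(1, labels.count(1))

def test_meets_documented_sensitivity_floor():
    # Toy hold-out slice; in practice load the frozen, versioned test set.
    labels = [1, 1, 1, 0, 0, 1]
    preds  = [1, 1, 0, 0, 0, 1]  # stand-in for model.predict(...)
    assert sensitivity(preds, labels) >= 0.75, "below the documented floor"
```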
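For monitoring, the population stability index (PSI) is one common, dependency-free way to quantify score drift between a reference window and live traffic. The 0.2 alert threshold is a widely cited rule of thumb, not a requirement; the bin count and sample scores are assumptions.

```python
import math

def population_stability_index(reference: list[float], live: list[float],
                               bins: int = 10) -> float:
    """PSI between reference and live score distributions."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0  # guard against a constant reference

    def bucket_shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Floor empty buckets so the log term stays finite.
        return [max(c / len(values), 1e-6) for c in counts]

    ref, cur = bucket_shares(reference), bucket_shares(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

reference_scores = [0.10, 0.20, 0.25, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90]
live_scores      = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]
psi = population_stability_index(reference_scores, live_scores)
if psi > 0.2:  # widely cited alert threshold; tune to your context
    print(f"drift alert: PSI = {psi:.2f}")
```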
Industry momentum you should track
As of November 2024, AI had already supported the development or repurposing of 3,000+ drugs. Approaches vary: in-house models, external partnerships, or hybrids.
- GSK's collaborations aim to strengthen early R&D as key patents expire.
- Eli Lilly and NVIDIA are building a high-performance computing stack and an AI co-innovation lab for future programs.
- AstraZeneca is acquiring Modella AI to speed oncology development after an initial 2025 research agreement.
Investor interest is strong, with venture financing deals involving AI up more than 400% between 2014 and 2024.
How to act this quarter
- Run a quick gap assessment against the principles above for your current models and tools.
- Stand up a lightweight AI governance board with clear intake, review, and approval steps.
- Add data lineage and model cards to your definition of done. No artifact, no release (see the gate sketch after this list).
- Instrument drift and bias monitoring before scaling across production teams.
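The "no artifact, no release" rule is easy to automate: a small script that exits nonzero when required artifacts are absent, wired into CI before promotion. The artifact list below is an assumption tied to the earlier sketches; swap in your own paths.

```python
# release_gate.py -- exit nonzero if required artifacts are missing
import sys
from pathlib import Path

REQUIRED_ARTIFACTS = [
    Path("model_card.json"),  # documentation artifact
    Path("lineage.jsonl"),    # data lineage log
]

def main() -> int:
    missing = [str(p) for p in REQUIRED_ARTIFACTS if not p.exists()]
    if missing:
        print(f"release blocked; missing artifacts: {', '.join(missing)}")
        return 1
    print("artifact gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```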
Helpful references
For background, see the FDA's guidance on AI/ML in drug and biologic development and the EMA's reflection paper on AI in the medicines lifecycle.
If your team is upskilling on MLOps, validation, and AI safety, explore AI courses by job role.