Eight Essential Steps for Implementing AI in Pharmacovigilance Compliance and Practice

The CIOMS Draft Report offers practical guidance for applying AI in pharmacovigilance, aligning with EU and FDA regulations. It emphasizes human oversight, transparency, and data privacy to ensure safe, ethical AI use.

Published on: Jun 03, 2025

Artificial Intelligence in Pharmacovigilance: Eight Action Items for Life Sciences Companies

The Council for International Organizations of Medical Sciences (CIOMS) Working Group XIV has released a Draft Report offering practical guidance for applying artificial intelligence (AI) in pharmacovigilance (PV). The report translates global AI regulations, notably the EU Artificial Intelligence Act (EU AI Act), into actionable steps tailored to PV. While the U.S. lacks overarching AI legislation, the report serves as a valuable resource for regulators and industry stakeholders shaping AI use in PV.

With the EU AI Act having entered into force in 2024, AI systems classified as high-risk, including many used in healthcare and PV, must meet strict requirements for risk management, transparency, human oversight, and data protection. The European Medicines Agency (EMA) distinguishes AI systems that pose high patient risk or carry significant regulatory impact, requiring case-by-case assessment within the medicinal product lifecycle.

The CIOMS Draft Report bridges these regulatory frameworks and the operational realities of PV. It complements EMA’s 2024 Reflection Paper and the FDA’s 2025 guidance on AI use in drug and biological product regulation, helping life sciences companies implement AI systems that are compliant, scientifically sound, and ethically responsible.

1. Translate Regulatory Principles into PV Practice

Regulations like the EU AI Act and FDA guidance set broad standards for high-risk AI systems. The Draft Report contextualizes these by focusing on PV-specific workflows and risks. It provides detailed risk assessment methods for AI use cases such as processing safety reports and detecting signals. Human oversight should be scaled according to the potential impact on patient safety and regulatory decisions.

2. Operationalize Human Oversight

Human oversight is mandatory for high-risk AI systems. The Draft Report outlines practical oversight models (human-in-the-loop, human-on-the-loop, and human-in-command) mapped to specific PV tasks. Life sciences companies are encouraged to implement, monitor, and adapt these models to maintain accountability and meet ethical and regulatory standards.

3. Ensure Validity, Robustness, and Continuous Monitoring

The report stresses establishing reference standards, validating AI models against real-world PV data, and continuously monitoring deployed systems to detect performance drift or emerging risks. Addressing data quality and representativeness helps mitigate bias and maintain AI reliability as clinical data evolve.

4. Build in Transparency and Explainability

Transparency is a core requirement under the EU AI Act. The Draft Report specifies what information life sciences companies should disclose, including model architecture, inputs and outputs, and human-AI interactions. Explainable AI techniques support regulatory audits, build user trust, and facilitate error investigations. The report also emphasizes documenting AI's role in handling safety data.

5. Address Data Privacy and Cross-Border Compliance Issues

Strict data privacy controls remain essential, aligned with frameworks like the EU General Data Protection Regulation (GDPR). The report highlights risks from generative AI and large language models, such as potential re-identification of anonymized data. Recommendations include de-identification, data minimization, and secure data handling protocols.

6. Promote Nondiscrimination

The EU AI Act requires avoiding discriminatory outcomes. The Draft Report advises selecting and evaluating training datasets for representativeness and applying bias mitigation strategies. This approach ensures AI systems uphold regulatory and ethical standards in PV.

7. Establish Governance and Accountability Structures

Creating cross-functional governance bodies and assigning clear roles across the AI lifecycle is critical. The report provides governance tools to document actions, manage changes, and ensure traceability, supporting both internal oversight and external inspections.

8. Comment on the Draft Report

The consultation period for the Draft Report is open until June 6, 2025. Life sciences companies should review the report and submit feedback to help shape future AI standards in PV. Participation is particularly valuable for U.S. organizations, as the finalized guidance may inform forthcoming regulatory approaches.
