19,000 variations, yes-or-no answers: Inside VerifAI's path to consistent pharma reviews

BfArM and LORENZ show how AI handles routine pre-checks with clear, auditable rules, cutting variance and rework. Legal teams get predictable outcomes and stronger evidence trails.

Published on: Feb 11, 2026

AI streamlines regulatory content review: insights from BfArM and LORENZ

At the Global Pharmaceutical Regulatory Affairs Summit, Harald von Aschen (BfArM) and Jen Heller (LORENZ Life Sciences) laid out how AI is taking repetitive pre-checks off regulators' plates while keeping decisions predictable and auditable. For legal teams, the headline is simple: less subjectivity, clearer rules, tighter evidence trails.

At a glance

  • BfArM handles ~19,000 Type IA variations annually and uses AI to automate routine validation checks.
  • LORENZ's VerifAI pilot runs on Meta Llama 3 with cascaded prompting to curb hallucinations.
  • Trust is built through explicit rules, boundaries, and deterministic yes-or-no outcomes.

Why this matters for Legal

Volume and variance create legal risk. Thousands of pages per submission and multiple variations per product make manual cross-referencing error-prone. Inconsistent outcomes are hard to defend and expensive to remediate.

The pilot emphasizes rule transparency, traceability, and clear pass/fail logic. That maps directly to defensibility in audits, fair treatment across applicants, and cleaner evidence for disputes or inspections.

Where AI is applied

VerifAI automates pre-checks on electronic application forms (EMA web-based forms in FHIR format). The system extracts structured XML fields and applies rule-based validations to confirm completeness and internal consistency before deeper review. Reference: EMA electronic application forms and HL7 FHIR.
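
To make that concrete, here is a minimal Python sketch of a structured-field pre-check. The XML layout, field names, and rules are illustrative assumptions for this article, not the pilot's actual schema or rule set.

  # Minimal sketch of a deterministic structured-field pre-check.
  # Field names, rule IDs, and the XML layout are illustrative only.
  import xml.etree.ElementTree as ET

  SAMPLE_FORM = """
  <application>
    <procedureType>IA</procedureType>
    <productName>Examplomab</productName>
    <marketingAuthorisationNumber>EU/1/24/0000</marketingAuthorisationNumber>
  </application>
  """

  def extract_fields(xml_text: str) -> dict:
      """Flatten top-level elements into a field dict."""
      root = ET.fromstring(xml_text)
      return {child.tag: (child.text or "").strip() for child in root}

  # Each rule is (rule_id, description, predicate); predicates return a
  # plain yes/no, so there is deliberately no "maybe" outcome.
  RULES = [
      ("R001", "Procedure type is present",
       lambda f: bool(f.get("procedureType"))),
      ("R002", "Procedure type is a known variation type",
       lambda f: f.get("procedureType") in {"IA", "IAIN", "IB", "II"}),
      ("R003", "MA number matches the expected EU pattern",
       lambda f: f.get("marketingAuthorisationNumber", "").startswith("EU/")),
  ]

  def validate(fields: dict) -> list:
      """Apply every rule; each result is an auditable pass/fail record."""
      return [{"rule": rid, "check": desc, "passed": pred(fields)}
              for rid, desc, pred in RULES]

  for result in validate(extract_fields(SAMPLE_FORM)):
      print(result)

Each record carries the rule that fired and a binary outcome, which is exactly the audit shape the pilot emphasizes.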

Today's scope is structured-field checks. The roadmap adds cross-document verification and advanced reasoning: linking data across sequences and modules (e.g., eCTD) to surface conflicts earlier and reduce rework.

How the pilot works

  • Model: Meta Llama 3 hosted on AWS; multilingual to handle Europe's submissions.
  • Input: Structured fields (XML) extracted from EMA web forms in FHIR format.
  • Logic: Rule-based validations with deterministic outcomes; results are explainable.
  • Cascaded prompting: If the first prompt underperforms, a second prompt refines the check, minimizing hallucinations and staying within context limits (see the sketch after this list).
  • Interface: A GUI for upload, review, alerts, and direct navigation to where each rule was applied.
  • Data sources: Option to connect external sets (e.g., RMS) to enrich validation.
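
The session did not share prompt text, so the sketch below only shows the cascade pattern: a narrow first prompt, then a stricter refinement when the reply is not a clean yes or no. Here call_model is a hypothetical stand-in for the hosted model endpoint, and both prompts are invented for illustration.

  # Sketch of the cascaded-prompting pattern; call_model is a hypothetical
  # stand-in for the hosted LLM endpoint, and the prompts are illustrative.
  from typing import Callable

  FIRST_PROMPT = ("Answer strictly YES or NO. Does the field value "
                  "'{value}' satisfy this rule: {rule}?")
  # The refinement narrows scope and restates the output contract, which
  # is what keeps the model inside a deterministic yes/no boundary.
  REFINE_PROMPT = ("Your previous answer was not a single YES or NO. "
                   "Considering only the rule '{rule}' and the value "
                   "'{value}', reply with exactly one word: YES or NO.")

  def cascaded_check(rule: str, value: str,
                     call_model: Callable[[str], str]) -> bool:
      answer = call_model(FIRST_PROMPT.format(rule=rule, value=value))
      if answer.strip().upper() not in {"YES", "NO"}:
          # First prompt underperformed: cascade to the stricter prompt.
          answer = call_model(REFINE_PROMPT.format(rule=rule, value=value))
      verdict = answer.strip().upper()
      if verdict not in {"YES", "NO"}:
          # Still ambiguous: fail closed and route to a human reviewer.
          raise ValueError(f"No deterministic outcome for rule: {rule}")
      return verdict == "YES"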

The pain points it targets

  • High volume and complexity across products, groupings, and dependencies.
  • Repetitive manual checks that invite fatigue and inconsistency.
  • Redundancy, including revalidating information already confirmed elsewhere (e.g., by EMA).
  • Resource constraints that make proportional staffing increases unrealistic.

Opportunities for agencies and sponsors

  • Efficiency: Automate routine validations to free reviewers for edge cases.
  • Consistency: Standardize outcomes; reduce reviewer-to-reviewer variance.
  • Focus on value: Shift expert time to complex assessments and risk signals.
  • Scalability: Handle more variations without swelling headcount.
  • Global relevance: Apply consistent checks aligned to eCTD and regional standards.

Governance and trust: the legal lens

Heller's framing is blunt: set boundaries so the model delivers clear yes-or-no results. No gray zones. That means policies first, automation second.

The team enforces process granularity and traceability: what the system did, what rules fired, and why the outcome was reached. Incremental rollout matters: early wins come fast, but the last mile takes careful tuning to make results reliable and repeatable under scrutiny.
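
One way to picture that level of traceability is a structured record per rule evaluation. The schema below is an assumption about what such a trace could capture; it is not VerifAI's actual format.

  # Illustrative trace record for one rule evaluation; the schema is an
  # assumption about what "traceability" could capture, not VerifAI's format.
  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone
  import json

  @dataclass(frozen=True)
  class RuleTrace:
      rule_id: str          # which rule fired
      rule_source: str      # statute, guidance, or SOP the rule cites
      input_ref: str        # pointer to the field or document checked
      evidence: str         # the value the rule evaluated
      outcome: bool         # deterministic yes/no result
      model_version: str    # versioned model, per change management
      ruleset_version: str  # versioned rule set
      timestamp: str        # when the check ran

  trace = RuleTrace(
      rule_id="R002",
      rule_source="EMA variations guideline (illustrative citation)",
      input_ref="application/procedureType",
      evidence="IA",
      outcome=True,
      model_version="llama-3-pilot-0.4",
      ruleset_version="2026.02",
      timestamp=datetime.now(timezone.utc).isoformat(),
  )
  print(json.dumps(asdict(trace), indent=2))  # stored alongside the submission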

Practical checklist for legal and compliance teams

  • Codify rule sources: cite statutes, guidance, and internal SOPs for each validation.
  • Require explainability: for every flag, store the triggering rule, evidence, and timestamp.
  • Set decision boundaries: define what remains human-only (e.g., scientific judgment, benefit-risk).
  • Control data flows: confirm data residency, access controls, and logging on all connected sources (e.g., RMS, internal registries).
  • Change management: version rules, prompts, and models; document approvals and rollbacks (a manifest sketch follows this list).
  • Quality and MRM: align to model risk management, bias checks, and performance thresholds with periodic revalidation.
  • Incident playbook: define escalation paths for false positives/negatives and applicant disputes.
  • Retention and audit: preserve artifacts for the full regulatory retention period.
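
As a sketch of the change-management item above, a release manifest can tie rules, prompts, and the model to one approved, reversible release. Every name and field here is illustrative, not a real artifact from the pilot.

  import json

  # Illustrative change-management manifest: any edit to a rule, prompt,
  # or model produces a new release that can be approved or rolled back.
  RELEASE_MANIFEST = {
      "release": "2026.02",
      "model": {"name": "llama-3-pilot", "version": "0.4"},
      # Hash the prompt text so any prompt change forces a new release.
      "prompts": {"first_pass": "sha256:<hash>", "refinement": "sha256:<hash>"},
      "rules": [
          {"id": "R001", "version": 3, "source": "internal SOP (illustrative)"},
          {"id": "R002", "version": 1, "source": "EMA guideline (illustrative)"},
      ],
      "approved_by": ["regulatory-lead", "legal-counsel"],
      "rollback_to": "2026.01",  # prior known-good release
  }
  print(json.dumps(RELEASE_MANIFEST, indent=2))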

What's next

Von Aschen pointed to cross-document checks and deeper reasoning as the next leap. The guardrails stay: strict rules, narrow scope per step, and transparent outputs. The goal is simple: make outcomes predictable, enforceable, and easy to audit.

If your organization is building similar capabilities, align legal, regulatory, and IT early. Decide what must be deterministic, what needs human judgment, and what proof you'll keep for auditors and courts.

Further learning: If you're planning team upskilling for AI in regulated work, see our role-based learning paths: AI courses by job.

