In my previous post, I explored how AI is transforming pharmacovigilance and highlighted use cases that go beyond case intake. Building on that discussion, I wanted to understand how organizations are validating AI applications and what regulatory guidelines say about it.

While numerous frameworks exist (issued by bodies such as the FDA, WHO, CIOMS Working Groups, EMA, and Health Canada), I focused my review on two key documents:

  • The EU AI Act
  • EudraLex - Volume 4 - Good Manufacturing Practice (GMP) guidelines (draft)

These are highly relevant to pharmacovigilance systems and encapsulate most of the critical points from other guidance documents. Although the EudraLex draft is still under review, it offers a clear picture of regulatory expectations. I will cover the FDA's guidance in my next post.


Validating AI in Pharmacovigilance

Background: EudraLex Draft Guidelines

On July 7, the European Commission published three new draft guidelines under EudraLex - Volume 4, aimed at guiding the use and validation of AI in regulated applications. Here are the main considerations for pharmacovigilance.


Documentation

Organizations should keep comprehensive records of new technologies, services, or solutions; define system requirements; and perform risk-based evaluations in line with the ALCOA+ principles: Attributable, Legible, Contemporaneous, Original, and Accurate, plus Complete, Consistent, Enduring, Available, and Traceable.

In practice, this means:

  • Establishing a data governance system that covers the entire data lifecycle: from creation and processing to verification, decision-making, retention, archiving, retrieval, and destruction.
  • Periodically reviewing and updating documents to reflect current risks.
  • Ensuring any automation (scripts, AI models, or hosted services) is integrated into the quality management system.
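To make the ALCOA+ expectations concrete, here is a minimal Python sketch of what an attributable, contemporaneous, and traceable record of an automated decision might look like. The field names and the `triage-model-v1.2` actor are hypothetical illustrations, not terms from the draft guideline:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AuditRecord:
    """One ALCOA+-style audit entry for an automated decision."""
    actor: str    # Attributable: who or what produced the record
    action: str   # what was done, e.g. a case-triage decision
    payload: dict # Original / Accurate: the data as captured
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )  # Contemporaneous: stamped at creation time

    def checksum(self) -> str:
        # Enduring / Traceable: a content hash lets later reviewers
        # verify the record has not been altered since creation.
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

record = AuditRecord(
    actor="triage-model-v1.2",
    action="case_intake",
    payload={"case_id": "PV-001", "seriousness": "non-serious"},
)
print(record.checksum())
```

In a real system, records like this would be written to append-only storage so the "Enduring" and "Available" properties hold over the retention period.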

System Validation

The same validation principles that apply to traditional computerized systems also apply to AI/ML, but with additional requirements to address AI-specific challenges:

  • Validate the AI model and the overall system before deployment.
  • Apply continuous risk management to monitor for bias in training data, incorrect predictions, regulatory changes, model drift, and performance degradation.
  • Document dataset lineage, detailing where data came from, how it was processed, and why it is suitable.
  • For non-deterministic models, account for bias detection, performance variability, and probabilistic error handling.
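The dataset-lineage point lends itself to a simple structure: each training or validation dataset gets a record pinning its exact contents (via a hash) to its source, processing history, and suitability rationale. A minimal sketch, with illustrative field names:

```python
import hashlib
from pathlib import Path

def lineage_entry(path, source, steps, rationale):
    """Build a lineage record tying a dataset file to its origin
    and processing history."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "file": Path(path).name,
        "sha256": digest,                 # pins the exact dataset version
        "source": source,                 # where the data came from
        "processing_steps": steps,        # how it was processed
        "suitability_rationale": rationale,  # why it is fit for purpose
    }
```

Storing the hash alongside the narrative fields means any later retraining can prove it used exactly the documented dataset, which supports both traceability and change control.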

Key Shifts in Validation Strategy

While the foundation is similar to traditional validation, AI introduces new expectations:

  • Explainability & Transparency - AI should be explainable enough for regulators, QA teams, and users to understand how decisions are made and what inputs lead to outputs. Systems should log a confidence score for each prediction and flag low-confidence outputs as ‘undecided’.
  • Human Oversight - All critical AI-driven decisions affecting GxP data, patient safety, or product quality must be reviewed by qualified personnel, following a human-in-the-loop approach.
  • Change Control - Any update to the model, training dataset, or configuration should be fully traceable and followed by revalidation under a formal change control process.
  • Vendor Oversight - Contracts with external vendors should clearly define data ownership, dataset access, and responsibilities for model maintenance and validation.
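The confidence-flagging and human-in-the-loop points above can be sketched as a thin wrapper around any prediction function. The 0.85 threshold is an illustrative placeholder, not a regulatory value; each organization would justify its own threshold in the validation plan:

```python
def triage(predict, case, threshold=0.85):
    """Route a prediction: accept high-confidence outputs,
    flag the rest as 'undecided' for human review."""
    label, confidence = predict(case)
    entry = {"case": case, "label": label, "confidence": confidence}
    if confidence < threshold:
        entry["label"] = "undecided"   # withheld from automated flow
        entry["needs_review"] = True   # queued for qualified personnel
    else:
        entry["needs_review"] = False
    return entry
```

Logging the confidence alongside the label also supports the explainability expectation: reviewers and regulators can see not just what the model decided, but how sure it was.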

Validation for Non-Deterministic Models

For non-deterministic systems, simple pass/fail testing is insufficient. Instead, organizations should:

  • Use statistical validation techniques.
  • Define performance ranges and acceptable error thresholds.

This approach acknowledges that outputs may vary, but ensures they remain within controlled and documented limits.
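A minimal sketch of this idea, assuming an evaluation harness that returns a per-run accuracy score. The 0.90 mean and 0.02 standard-deviation limits are placeholders that an organization would define and justify in its validation plan:

```python
import statistics

def validate_statistically(run_model, n_runs=30, min_mean=0.90, max_stdev=0.02):
    """Repeat the evaluation and check that mean performance and
    run-to-run variability stay within predefined limits."""
    scores = [run_model() for _ in range(n_runs)]
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return {
        "mean": mean,
        "stdev": stdev,
        # pass only if performance is high enough AND stable enough
        "passed": mean >= min_mean and stdev <= max_stdev,
    }
```

The key shift from pass/fail testing is that the acceptance criterion is a documented statistical range, so a model that varies between runs can still be shown to operate within controlled limits.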

Note: Annex 22 of the EudraLex draft applies only to static models. However, it is still worth reviewing for insights into testing strategies for AI-enabled pharmacovigilance systems.


If you have any comments, feedback, or requests, please feel free to connect with me on LinkedIn. And if you liked this post, don’t forget to share it with your network!