Practical Guide to Implementing Explainability Techniques for Transparency and Explainability in PV

Artificial intelligence is rapidly modernising pharmacovigilance (PV). It now offers powerful capabilities for processing Individual Case Safety Reports (ICSRs), detecting safety signals, and predicting adverse drug reactions. However, as AI systems become more sophisticated, many rely on complex “black-box” models. These models can produce highly accurate results, but their internal reasoning is often difficult for humans to interpret. This raises an important question: How can PV professionals and regulators trust decisions they cannot clearly understand?...

March 11, 2026 · 7 min · Kunal

AI Governance, Compliance, and Risk Management in Pharmacovigilance

Pharmacovigilance has long sat at the heart of patient safety, traditionally relying on structured processes and conservative risk management to detect and prevent adverse drug reactions. Today, Artificial Intelligence is fundamentally transforming PV operations, from automated case intake and triage using Natural Language Processing to machine learning algorithms identifying patterns across massive safety datasets. However, introducing dynamic, learning AI systems into a highly regulated environment raises a critical challenge: How do we govern and validate these systems while maintaining compliance and patient protection?...

March 3, 2026 · 5 min · Kunal

Validating AI Systems in Pharmacovigilance: Key Insights on Validation and Credibility

In my previous post, I explored the European Commission guidelines and the EU AI Act, looking at how they influence AI applications and the validation strategies surrounding them. This time, I want to shift focus to the FDA’s draft guidance on AI, released in January 2025. Although still in draft form, the document is an important signal of how the FDA is approaching AI, and it helps us understand what kinds of validation methodologies are expected to align with global regulatory thinking....

August 17, 2025 · 4 min · Kunal

Validating AI Systems in Pharmacovigilance: Insights from the EU AI Act and EudraLex Draft Guidelines

In my previous post, I explored how AI is transforming pharmacovigilance and highlighted use cases that go beyond case intake. Building on that discussion, I wanted to understand how organizations are validating AI applications and what regulatory guidelines say about it. While numerous frameworks exist, issued by bodies such as the FDA, WHO, CIOMS Working Groups, EMA, and Health Canada, I focused my review on two key documents: the EU AI Act, and the EudraLex Volume 4 Good Manufacturing Practice (GMP) guidelines (draft). These are highly relevant to pharmacovigilance systems and encapsulate most of the critical points from other guidance documents....

August 15, 2025 · 3 min · Kunal

How AI is Shaking Up Pharmacovigilance

For the longest time, companies stayed away from trying out new tech stacks in pharmacovigilance (PV). But with the rise of generative AI, things are changing fast. Big players like Bayer teamed up with Genpact to use AI for automating case intake. Sanofi joined hands with IQVIA to apply AI for end-to-end case processing. And these are just a couple of examples. As case volumes keep going up year after year, everyone's now in a race to bring AI into the game: to cut costs, boost quality, speed up case processing, and let safety teams focus on what really matters....

May 12, 2025 · 4 min · Kunal