
CIOMS Working Group Draft Report on Artificial Intelligence in Pharmacovigilance: Key Takeaways

The field of pharmacovigilance (PV)—the science of detecting, assessing, understanding, and preventing adverse effects or any other drug-related problems—is undergoing a digital transformation. At the heart of this evolution is Artificial Intelligence (AI), a technology that promises to streamline PV operations, enhance drug safety monitoring, and ensure timely decision-making. The CIOMS Working Group XIV Draft Report (1 May 2025) offers a landmark framework for integrating AI into PV systems, outlining critical principles, applications, and future directions.


About CIOMS: Founded in 1949 by the World Health Organization and UNESCO, the Council for International Organizations of Medical Sciences (CIOMS) is a leading international non-profit organization dedicated to advancing public health through guidance on medical product development and safety.


AI is defined in the report as a machine-based system that can infer patterns or decisions from data inputs to produce outputs like recommendations, classifications, or predictions. In PV, this can mean faster detection of adverse drug reactions (ADRs), efficient Individual Case Safety Report (ICSR) handling, and enhanced risk signal detection using massive data sources such as electronic health records (EHRs) and real-world data (RWD).

Traditional PV processes are under immense pressure due to rising case volumes, global regulatory complexity, and public expectations for real-time safety surveillance. AI—especially machine learning (ML) and generative AI (GenAI)—offers scalable solutions, enabling predictive, preventive, and personalized pharmacovigilance.

The report centers on seven fundamental principles critical to the responsible deployment of AI in PV:

1. Risk-Based Approach

AI systems must be evaluated for the risk they pose based on their use case and impact. Systems making autonomous decisions in safety-critical tasks (e.g., signal detection) must undergo more stringent oversight. Risks should be continuously reassessed as AI evolves.

2. Human Oversight

Models should be operated with “Human-in-the-loop (HITL)” or “Human-on-the-loop (HOTL)” frameworks to ensure accountability. Human oversight is essential in defining performance thresholds and mitigating AI-induced errors.
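One common way to realize a human-in-the-loop workflow is a confidence gate: predictions above a human-defined threshold are accepted automatically, while the rest are routed to a reviewer. The sketch below is illustrative only; the case identifiers, labels, and the 0.90 threshold are assumptions, not values from the report.

```python
from dataclasses import dataclass

@dataclass
class CasePrediction:
    case_id: str
    label: str          # e.g. "serious" / "non-serious"
    confidence: float   # model confidence in [0, 1]

def triage(predictions, review_threshold=0.90):
    """Route low-confidence predictions to a human reviewer (HITL).

    Cases at or above the threshold are auto-accepted; the rest are
    queued for human review. The threshold itself is a human-defined
    performance parameter, as the report recommends.
    """
    auto_accepted, needs_review = [], []
    for p in predictions:
        if p.confidence >= review_threshold:
            auto_accepted.append(p)
        else:
            needs_review.append(p)
    return auto_accepted, needs_review

preds = [
    CasePrediction("ICSR-001", "serious", 0.97),
    CasePrediction("ICSR-002", "non-serious", 0.62),
]
accepted, queued = triage(preds)
```

In a "human-on-the-loop" (HOTL) variant, all cases would be auto-processed and the reviewer would instead monitor aggregate performance, intervening when it drifts below the agreed threshold.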

3. Validity & Robustness

AI solutions in PV must demonstrate reliability through rigorous performance evaluations under real-world conditions. Models should be tested for generalizability, bias, and representativeness using diverse datasets.

4. Transparency

Stakeholders must be informed about when and how AI is used. This includes disclosing model types, performance benchmarks, and decision-making logic. Explainable AI (XAI) methods help stakeholders understand the inner workings of "black-box" models and ensure trustworthiness.

5. Data Privacy

Given the sensitivity of health data, AI models must comply with data privacy regulations such as the GDPR or HIPAA. GenAI poses particular risks because it can potentially re-identify anonymized data, making ethical data handling paramount.

6. Fairness & Equity

AI systems must be free from bias that could lead to unequal treatment of subpopulations. Developers are encouraged to use representative datasets and perform subgroup analyses to ensure equitable outcomes.
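A subgroup analysis of the kind the report encourages can be as simple as computing a performance metric per subpopulation and comparing the results. The sketch below computes per-subgroup recall for a hypothetical ADR classifier; the subgroup names and data are invented for illustration.

```python
from collections import defaultdict

def subgroup_recall(records):
    """Compute per-subgroup recall for an ADR classifier.

    `records` is a list of (subgroup, is_true_adr, predicted_adr) tuples;
    recall = detected true ADRs / all true ADRs, computed per subgroup.
    A large gap between subgroups is a red flag for inequitable outcomes.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # missed ADRs per subgroup
    for group, is_true_adr, predicted_adr in records:
        if is_true_adr:
            if predicted_adr:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

data = [
    ("elderly", True, True), ("elderly", True, False),
    ("pediatric", True, True), ("pediatric", True, True),
]
result = subgroup_recall(data)
```

Here the hypothetical model misses half of the true ADRs in the elderly subgroup, the kind of disparity a representative dataset and targeted retraining would aim to close.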

7. Governance & Accountability

A structured governance framework is needed to maintain trust and regulatory compliance. Clear role definitions and change management protocols ensure accountability throughout the AI lifecycle.


The report documents several use cases and real-world implementations:

  • Duplicate ICSR detection using AI in systems like FAERS and EudraVigilance.

  • Automated triaging of safety reports to prioritize human review.

  • Natural Language Processing (NLP) for summarizing case narratives or identifying adverse events from social media and scientific literature.

  • Generative AI applications such as AI-generated follow-up letters, SQL queries for safety databases, and summarization of regulatory documents.

These examples illustrate the growing breadth of AI’s potential to revolutionize PV across the entire lifecycle—from adverse event reporting to regulatory submissions.
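As a toy illustration of the first use case above, duplicate ICSR detection can be approximated by scoring the text similarity of case narratives. This is a deliberately simple stand-in, not the actual method used in FAERS or EudraVigilance; real deduplication pipelines also compare structured fields such as patient age, event dates, suspect drug, and reaction terms. The narratives and the 0.85 threshold are invented.

```python
from difflib import SequenceMatcher

def likely_duplicates(reports, threshold=0.85):
    """Flag pairs of case narratives whose text similarity exceeds a threshold.

    Compares every pair of narratives with difflib's SequenceMatcher and
    returns (index_i, index_j, similarity) for pairs above the cutoff.
    """
    pairs = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            ratio = SequenceMatcher(None, reports[i], reports[j]).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs

narratives = [
    "Patient developed severe rash two days after starting drug X.",
    "Patient developed a severe rash two days after starting drug X.",
    "No adverse events reported during follow-up.",
]
dups = likely_duplicates(narratives)
```

The first two narratives differ by a single word and are flagged as a probable duplicate pair, while the unrelated third report is not.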


Regulatory and Global Perspectives:

The CIOMS report situates its recommendations within a global regulatory context:

  • EMA’s Reflection Paper on AI (2024) advocates a risk-based approach and encourages early dialogue with regulators for AI tools with high regulatory impact.

  • FDA’s Draft Guidance (2025) outlines a credibility assessment framework for AI used in regulatory decision-making.

  • WHO, OECD, and Health Canada have all released ethical guidelines for AI use in health.

These frameworks align with the CIOMS principles and highlight the international push toward standardized, safe, and effective AI use in PV.


The roadmap set out in this report not only reflects today’s best practices but also prepares the industry for tomorrow’s challenges, positioning pharmacovigilance at the forefront of ethical and technological innovation.



Visit: https://cioms.ch/working_groups/working-group-xiv-artificial-intelligence-in-pharmacovigilance for more details and a public consultation form (open until 6 June 2025).


DISCLAIMER

The views expressed in this publication are purely my own understanding and do not necessarily reflect the views or guidance of any government or health authority. This blog/website, made available by a regulatory professional, is for educational purposes only, intended to give you general information and a general understanding of pharmaceutical regulations, not to provide specific regulatory advice. By using this site you understand that no client relationship exists between you and the publisher. This blog/website should not be used as a substitute for competent regulatory advice; you should consult a qualified regulatory professional in your jurisdiction. We have made every reasonable effort to present accurate information on this website; however, we are not responsible for any results you experience while using it, and we recommend consulting official sources.
