USFDA Guidance: Use of Artificial Intelligence To Support Regulatory Decision-Making & Guiding Principles of Good AI Practice in Drug Development

Artificial Intelligence (AI) is no longer a future concept in pharmaceutical research—it is already influencing how drugs and biological products are discovered, developed, manufactured, and monitored throughout their lifecycle. However, when AI outputs are used to support regulatory decisions, questions of trust, transparency, and accountability become critical.


Recognising this, global regulators have begun defining expectations for the responsible use of AI in drug development. Two key publications now form the foundation of this evolving framework:

  1. The FDA draft guidance on considerations for the use of Artificial Intelligence to support regulatory decision-making for drugs and biological products, and

  2. The Guiding Principles of Good AI Practice (GxP-AI) in Drug Development, developed collaboratively by FDA, EMA, and other international regulatory partners.

Together, these documents signal a strong move toward global regulatory alignment on how AI-generated evidence should be developed, validated, and maintained.


The Guiding Principles of Good AI Practice (GxP-AI)

In January 2026, FDA and EMA—together with other international regulatory authorities—published the Guiding Principles of Good AI Practice (GxP-AI) in Drug Development. This document represents a significant milestone, as it reflects shared global regulatory expectations rather than a single-agency viewpoint.

The principles define AI as system-level technologies used to generate or analyse evidence across the entire drug lifecycle, including nonclinical, clinical, manufacturing, and post-marketing phases. Both FDA and EMA stress that AI must be human-centric, ethically developed, and risk-based, with safeguards proportionate to the context in which the AI is used.

Key themes include strong data governance, multidisciplinary expertise, robust model development practices, and clear communication of AI system limitations. The principles also highlight the need for continuous monitoring and re-assessment, recognising that AI systems must remain reliable as data and use conditions evolve.


By jointly endorsing these principles, FDA and EMA signal their intent to harmonise expectations internationally, supporting innovation while maintaining regulatory rigor.


FDA Draft Guidance: A Risk-Based Credibility Framework

The FDA’s draft guidance, published in January 2025, focuses on situations where AI models are used to generate information that directly informs regulatory decisions for drugs and biological products. Importantly, the guidance does not apply to AI tools used only for internal efficiency, such as document drafting or project management.

A central concept in this guidance is credibility—defined as the level of trust that can be placed in an AI model’s output for a specific context of use. FDA introduces a risk-based credibility framework, which asks sponsors to clearly define what question the AI is answering, how influential the AI output is in the overall decision, and what the consequences would be if the output were incorrect.

The higher the potential impact on patient safety or product quality, the greater the expectation for validation, documentation, and oversight. FDA’s approach makes it clear that AI models are not evaluated in isolation; instead, their role within the broader regulatory decision-making process is key.

Transparency, Data Quality, and Lifecycle Oversight

FDA places strong emphasis on data governance and transparency. Sponsors are expected to document data sources, data processing steps, model assumptions, and performance metrics in a way that allows regulators to understand and evaluate the reliability of AI outputs.

Equally important is lifecycle management. AI models can degrade over time due to data drift or changes in operating conditions. FDA therefore expects ongoing monitoring, periodic re-evaluation, and integration of AI oversight into existing pharmaceutical quality systems. This lifecycle-focused thinking aligns closely with established principles such as ICH Q10 and Q12.


Implications for Industry

For pharmaceutical and biotech companies, success with AI will depend not only on technical performance, but also on transparency, governance, lifecycle oversight, and early regulatory engagement. Aligning with both FDA and EMA expectations will be essential as AI becomes an integral part of modern drug development.



DISCLAIMER

The views expressed in this publication are solely the author's understanding and do not necessarily reflect the position of any government or health authority. This blog is maintained by a regulatory professional for educational purposes only, to provide general information and a general understanding of pharmaceutical regulations, not specific regulatory advice. Use of this blog does not create a client relationship between you and the publisher. It should not be used as a substitute for competent pharmaceutical regulatory advice; please consult a qualified regulatory professional in your jurisdiction. Every reasonable effort has been made to present accurate information on this website; however, the author is not responsible for any results you experience from its use, and readers are encouraged to refer to official regulatory websites.
