European Commission: Guidelines on the Scope of the Obligations for General-Purpose AI Models established by AI Act
- Sharan Murugan

- Jul 19
On 18 July 2025, the European Commission published detailed Guidelines on the scope of the obligations for providers of general-purpose AI (GPAI) models under the AI Act (Regulation (EU) 2024/1689). The guidelines were issued just ahead of the AI Act's key compliance date of 2 August 2025, and aim to help AI developers, downstream providers, and especially industries such as pharmaceuticals understand what is required when using AI models or placing them on the EU market.

A general-purpose AI (GPAI) model is defined as an AI model capable of performing a wide range of distinct tasks, typically trained on vast datasets using self-supervised learning at scale. Examples include large language models (LLMs), generative image and video models, and multimodal AI systems.
These models can be embedded into downstream systems like clinical decision support tools, digital health apps, or automated data analysis platforms used in pharma.
The AI Act introduces obligations to ensure transparency, accountability, and risk mitigation for GPAI models. Key points from the guidelines:
✅ Providers must maintain technical documentation describing the training process, data, and model capabilities.
✅ Providers must provide information to downstream users (e.g., pharma companies integrating these models into clinical trial systems).
✅ Providers must develop a copyright compliance policy and publicly share a summary of training data.
Who Is a Provider? What Triggers Obligations?
According to the AI Act:
Provider: Any actor who develops or has a GPAI model developed and places it on the EU market, under their own name or trademark, for free or for payment.
Placing on the market: Making the model available on the EU market for the first time (via API, download, cloud service, etc.), regardless of where the provider is established.
Some GPAI models may have systemic risk, meaning they could significantly impact public health, safety, or fundamental rights.
A model is presumed to have systemic risk if the cumulative compute used for its training exceeds 10²⁵ FLOP (floating-point operations). These high-impact models face extra obligations, including:
Continuous risk assessments and mitigation.
Robust cybersecurity measures.
Incident reporting to the EU AI Office.
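To get a feel for the 10²⁵ FLOP threshold, here is a minimal sketch using the common 6·N·D rule of thumb (roughly 6 FLOP per parameter per training token for dense transformer models). This heuristic is not part of the AI Act or the Commission's guidelines; it is only a widely used back-of-the-envelope approximation.

```python
# Rough estimate of training compute via the 6*N*D heuristic
# (~6 FLOP per parameter per training token, dense transformers).
# This approximation is NOT defined by the AI Act; the 1e25 FLOP
# figure below is the presumption threshold stated in the Act.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^25 FLOP."""
    return estimate_training_flop(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# Example: a 70B-parameter model trained on 15 trillion tokens
# lands at about 6.3e24 FLOP, just below the presumption threshold.
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
```

As the example shows, today's largest frontier training runs sit near this threshold, which is why the presumption targets only the most compute-intensive models.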
The guidelines explain that GPAI models released under a free and open-source licence can be exempt from some transparency requirements, but only if:
They allow free use, modification, and redistribution.
The model’s parameters, weights, architecture, and usage documentation are publicly available.
There’s no monetisation of access or use.
These exemptions do not apply to open-source models with systemic risk.
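The exemption conditions above can be read as a simple conjunctive checklist: all licence conditions must hold, there must be no monetisation, and the model must not carry systemic risk. The sketch below encodes that logic; the field names are illustrative, not terms defined by the AI Act, and this is of course not legal advice.

```python
# Illustrative checklist for the open-source transparency exemption.
# Field names are hypothetical labels for the conditions listed in the
# guidelines, not AI Act terminology.

from dataclasses import dataclass

@dataclass
class GPAIModelRelease:
    free_use_modification_redistribution: bool  # licence permits all three
    weights_architecture_docs_public: bool      # parameters, weights, docs public
    access_monetised: bool                      # any monetisation of access/use
    systemic_risk: bool                         # model presumed systemic-risk

def qualifies_for_exemption(m: GPAIModelRelease) -> bool:
    """All conditions met, no monetisation, and no systemic risk."""
    return (m.free_use_modification_redistribution
            and m.weights_architecture_docs_public
            and not m.access_monetised
            and not m.systemic_risk)
```

Note that failing any single condition, including monetising access or crossing into systemic risk, removes the exemption entirely.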
Transitional Compliance and Enforcement Timelines
2 August 2025: Obligations take effect for newly placed GPAI models.
2 August 2027: Models placed on the market before 2 August 2025 must achieve full compliance.
Enforcement powers (including fines) begin on 2 August 2026; the European AI Office will supervise implementation and enforcement.
For pharmaceutical companies, these guidelines have direct and practical implications:
If your teams build or deploy GPAI models for drug discovery, trial optimisation, safety signal detection, or patient support, you must determine whether the model qualifies as GPAI and whether it carries systemic risk.
If you modify or fine-tune large GPAI models (for example, adapting a public LLM to your proprietary biomedical dataset), you could become the provider, with obligations under the AI Act.
For compliance, you will need to keep clear technical documentation, risk assessments, and summaries of training data, and possibly set up internal processes to monitor ongoing systemic risks.
The guidelines also help pharma teams choose the right AI partners: upstream providers who meet EU requirements, share documentation, and comply with systemic risk obligations.