The EU AI Act is now law. The Act introduces different timelines for different parts of the AI regulatory regime in the EU. The parts of the Act which regulate Prohibited AI and which mandate AI literacy are now in force.
You can find out more about AI literacy at AI Literacy Obligations. You can also read our FAQs on the EU AI Act at The EU Artificial Intelligence Act.
The EU AI Act’s rules on general purpose AI (GPAI) models started applying from 2 August 2025.
Under the EU AI Act, GPAI models are subject to a slightly different regulatory regime: the European Commission, and not individual Member States, has sole authority to oversee and enforce the rules relating to GPAI models. The new GPAI Code of Practice will help organisations working with GPAI develop their thinking on what's required.
Who does this apply to? Who is obliged to follow this Code of Practice?
This Code of Practice (which we will refer to as the Code) is voluntary, so strictly speaking no one is mandated to sign and follow it. However, one of the Code's main aims is to reduce the administrative burden of compliance, so it is well worth considering. As we've said, it also gives a clear indication of the European Commission's thinking in this area.
What kinds of organisations should think about signing the Code?
Providers of general-purpose AI models.
What is the purpose of this Code?
For some background, plans for this Code were introduced in Article 56 of the EU AI Act, which came into force on 1 August 2024. Under the EU AI Act, GPAI models are subject to specific obligations that are separate from the obligations on “AI systems”.
The purpose of this Code is to help providers of GPAI models comply with their obligations under the EU AI Act.
To be clear, this Code does not impose extra obligations on top of the ones in the EU AI Act. It is meant to help providers meet their obligations under the EU AI Act. For signatories, enforcement will also focus on monitoring their adherence to the Code, which can make compliance more predictable and administratively streamlined, with greater legal certainty compared to other methods of compliance. For example, the Code offers structured ways to present documentation.
This Code has three chapters:
- Transparency: relevant to all GPAI providers
- Copyright: relevant to all GPAI providers
- Safety and Security: relevant to providers of more advanced models, namely those GPAI models that present systemic risk.
1. Transparency
This chapter is mainly concerned with Article 53(1)(a) and (b) of the EU AI Act, which in general terms requires providers of GPAI models to draw up and keep up to date technical documentation of their models, including specific details such as technical means, design specifications, training data information and known or estimated energy consumption.
This chapter includes a Model Documentation Form, which signatories might find a useful tool to help them document and present the required information in a predictable and structured way. GPAI providers can then provide this Model Documentation Form and other necessary information to the AI Office and downstream providers when requested.
Downstream providers are the providers of AI systems (not GPAI models) who integrate the GPAI model into their AI system.
2. Copyright
This chapter relates to Article 53(1)(c) of the EU AI Act, which requires GPAI providers to put in place a policy to comply with copyright law and related rights.
In this chapter, signatories commit to certain measures such as:
- Employing, when web crawling, web crawlers that read and follow the instructions of the Robots Exclusion Protocol (IETF RFC 9309), and enabling affected rightsholders to obtain information about the web crawlers used.
- Implementing appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce protected training content in breach of copyright and related rights.
- Prohibiting copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents.
- For free GPAI models, alerting users to the prohibition of copyright-infringing uses.
- Designating a point of contact and enabling the lodging of complaints.
3. Systemic risks
Before looking at the Safety and Security Chapter, it’s probably helpful to consider: When do GPAI models present systemic risks?
A GPAI model is presumed to present systemic risk when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs), although the European Commission may update this threshold in light of technological advances. It can be hard to work out whether this threshold has been met but, as an illustration, some estimates suggest that an equivalent spend on computing power for 10^25 FLOPs at current prices might be around $6.94m. So, this is likely to involve only high-end computing operations.
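One way such estimates can be reached is a simple back-of-envelope calculation. The sketch below assumes a sustained throughput of 4 × 10^14 FLOP/s per accelerator and an all-in rental price of $1 per accelerator-hour; both figures are illustrative assumptions for the purposes of this sketch, not values taken from the Code or the Act.

```python
# Back-of-envelope cost estimate for 10^25 FLOPs of training compute.
# The throughput and price figures below are illustrative assumptions.

TRAINING_FLOPS = 1e25       # systemic-risk compute threshold in the EU AI Act
FLOPS_PER_SECOND = 4e14     # assumed sustained throughput per accelerator
PRICE_PER_HOUR_USD = 1.00   # assumed all-in rental price per accelerator-hour

accelerator_hours = TRAINING_FLOPS / FLOPS_PER_SECOND / 3600
cost_usd = accelerator_hours * PRICE_PER_HOUR_USD

print(f"{accelerator_hours:,.0f} accelerator-hours, roughly ${cost_usd / 1e6:.2f}m")
```

Under these assumed inputs the calculation lands at roughly $6.94m, in line with the estimate above; different hardware and pricing assumptions would of course move the figure considerably.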
Broadly speaking, systemic risk can be thought of as the risk of widespread, high-impact negative effects on things like public health, fundamental rights, and society as a whole.
Safety and Security Chapter
This chapter relates to Article 55 of the EU AI Act, which contains the specific obligations on providers of GPAI models considered to present "systemic risk". In general, this chapter contains the most detailed instructions. It also contains its own glossary.
Some of the signatories' commitments include:
- Adopting a state-of-the-art Safety and Security Framework, following a detailed three-step adoption process.
- Identifying systemic risks stemming from the model, following a detailed process outlined in the Code, with specific risk scenarios.
- Analysing each identified systemic risk based on five elements detailed in the Code.
- Specifying systemic risk acceptance criteria and determining whether the risks are acceptable.
- Implementing appropriate safety mitigations along the entire model lifecycle, with this Code containing some examples of mitigations.
- Implementing adequate levels of cybersecurity protection for their models and physical infrastructure along the entire model lifecycle.
- Prior to placing a model on the market, reporting information about their model and the systemic risk assessments and mitigation processes to the EU AI Office, by creating a Safety and Security Model Report. Also, keeping that report up to date. The Code contains a detailed list of information to include.
- Implementing processes and measures for Serious Incident Reporting.
What happens next?
The EU AI Act's rules on GPAI entered into application on 2 August 2025, and these rules will become enforceable by the European Commission's AI Office one year later for new models and two years later for existing models. This means that if you placed a model on the market before 2 August 2025, that model must comply with the EU AI Act's rules by 2 August 2027.
Now that the Code has been published, Member States and the European Commission will be assessing the adequacy of the Code. The European Commission website contains a download link to the signatory form and the email to which signatories should send the completed form. The page also shows the list of signatories, which includes some high-profile tech companies. However, there have been some notable and public absences from this list.
The European Commission has also published guidelines looking at the scope of obligations for GPAI models. It provides more details on when a model is considered a GPAI model and who could be considered a “provider”.
For further information:
Visit our Artificial Intelligence (AI) page to learn more about Punter Southall Law’s AI practice and some of the recent projects we have handled.
If you require legal advice on AI, Contact Us to arrange a consultation.