EU AI Act in Action: First Draft General-Purpose AI Code of Practice
On 14 November 2024, the European Union’s AI Office published the first draft of the General-Purpose AI Code of Practice (the Code). The purpose of the Code is to help providers (i.e., developers) of general-purpose AI (GPAI) models comply effectively with their obligations under the EU AI Act, which will apply from 2 August 2025. The drafters of the Code have set out to provide a tool that is “future-proof” and has global significance, as the first detailed rule-setting document for GPAI models.

The Code will address the key areas on which four Working Groups, led by renowned global experts in AI, are focusing in respect of GPAI models: (i) transparency and copyright-related rules; (ii) risk identification and assessment for systemic risk; (iii) technical risk mitigation for systemic risk; and (iv) governance risk mitigation for systemic risk. Adherence to the Code, albeit not compulsory, is likely to be very influential in demonstrating compliance with the obligations concerning GPAI models under the EU AI Act.

The publication of the first draft of the Code, with three more drafting rounds to come over the next five months, follows the recent consultation process led by the AI Office, to which nearly 430 responses were submitted (see details in our previous alert). The AI Office will enforce the obligations for GPAI model providers under the EU AI Act, and has wide powers, including the power to impose fines in relation to GPAI models of up to 3% of global annual turnover or 15 million Euros, whichever is higher.
The Chairs and Vice-Chairs of the four Working Groups describe this first draft as a “foundation for further refinement”. The draft contains a number of “Open Questions”, some of which we have highlighted below, on which relevant stakeholders are invited to provide views and comments by 28 November 2024. The drafters have reportedly aimed to make the Code “a flexible, targeted, and effective framework”, and the draft reflects a visible effort to achieve that aim. Businesses should watch carefully whether the final version of the Code strikes an appropriate balance between sufficient further concretisation of the steps to be implemented by companies across the AI value chain (i.e., supply chain) and the proclaimed desire to maintain flexibility.
Limited initial guidance, including on when an entity is a “provider” of GPAI following fine-tuning
The first draft of the Code does not yet include the granularity required for providers to implement operational changes, but it is nevertheless a welcome development, not least because it acknowledges where further guidance will be necessary.
For example, the Q&A accompanying the draft Code confirms that the AI Office intends to provide further clarification on the specific circumstances in which a downstream entity that fine-tunes or modifies an existing GPAI model may become a “provider” of a new model, and hence subject to the extensive obligations on such providers under the EU AI Act. The AI Office acknowledges that this is “a difficult question with potentially large economic implications”. It further explains that, in any case, if an entity becomes a “provider” following fine-tuning, its obligations as a provider will be limited to the fine-tuning, for example by complementing the already existing technical documentation with information on the modifications. In that context, the drafters of the Code have also reportedly confirmed that companies that build AI applications using another provider’s large GPAI model as a foundation will likely not have to bear the entire regulatory burden.
In addition, the Q&A confirms that the Code will provide further detail on what the relevant obligations imply for different ways of releasing GPAI models, including open-sourcing, as the Code acknowledges “the positive impact that open-source models have had on the development of the AI safety ecosystem”.
Key proposals in the draft Code: a rough sense of direction
The provisions of the draft Code adopt a high-level approach, setting out broad measures and (relatively) more specific sub-measures. The Code explains that more detailed Key Performance Indicators (KPIs) will be set out in subsequent drafts.
Key proposals include:
- Transparency: The draft Code details the type of information records providers of GPAI models must keep (and be prepared to provide to the AI Office and/or to downstream providers on request) in order to comply with their transparency obligations under Articles 53(1)(a) and (b) of the EU AI Act. Examples of such records include an up-to-date Acceptable Use Policy, information on data used for training, testing and validation (including the name of all web crawlers used) and detail on the core elements of model training (such as training stages and methods of optimisation). The draft Code contains an Open Question inviting comments on how the Code should provide greater detail on the type of information records.
- Rules related to copyright: The Code contains proposals explaining how GPAI model providers may comply with their obligations under Article 53(1)(c) of the EU AI Act in relation to copyright laws, including drawing up and implementing a copyright policy covering the life-cycle of GPAI models.
- Taxonomy of systemic risks: Drawing from the elements of a taxonomy of systemic risks, the draft Code identifies certain types of systemic risk, such as (i) risks related to offensive cyber capabilities; (ii) risks enabling chemical, biological, radiological or nuclear weapons attacks; (iii) issues related to the inability to control powerful autonomous GPAI models; (iv) the automated use of AI for research and development; (v) the facilitation of large-scale persuasion and manipulation; and (vi) large-scale illegal discrimination against individuals, communities or societies. Stakeholders are invited to provide feedback on a number of Open Questions related to the taxonomy of systemic risks, including the relevant considerations or criteria to take into account when determining whether a risk is a systemic risk, and whether any of the identified risks should be prioritised for addition to the main taxonomy of systemic risks.
- Systemic Risks - Safety and Security Framework: Once GPAI models with systemic risks have been identified, the Code promises to detail the risk management policies GPAI model providers can adhere to in order to proactively assess and proportionately mitigate systemic risks. The draft Code includes suggestions for technical as well as governance risk mitigation, and lists approximately 35 Open Questions, which overall are broad in nature. For example, in relation to risk mitigation, the Code invites comments on what standards for cybersecurity should be applied to GPAI models with systemic risks, depending on the systemic risk indicators and tiers of severity.
Next Steps
Discussions on the draft Code between the Working Groups, stakeholders, EU Member State representatives and international observers will begin in the week commencing 18 November 2024, with three further rounds of drafting, and hence changes to this first draft, taking place over the next five months. This iterative process is intended to end with a finalised and adopted GPAI Code of Practice in time for the 2 May 2025 deadline under the EU AI Act.