EU Unveils First Draft of General-Purpose AI Code of Practice

November 19, 2024


The European Union’s AI Office published the first draft of the General-Purpose AI (GPAI) Code of Practice on November 14, 2024. The Code is intended to help providers of AI models prepare for compliance with the EU AI Act, whose obligations for providers of GPAI models apply from August 2, 2025. The Code is designed to be both forward-looking and globally applicable, addressing transparency, risk evaluation, technical safeguards and governance. While adherence to the Code is not mandatory, it is anticipated to serve as a means of demonstrating compliance with the obligations under the EU AI Act. The draft follows a consultation that garnered approximately 430 responses. The AI Office will be empowered to enforce the underlying obligations, with penalties for non-compliance of up to 3% of worldwide turnover or €15 million. Three further iterations of the Code are anticipated over the coming five months.

The Chairs and Vice-Chairs of the four Working Groups describe this first draft as a “foundation for further refinement”. The draft contains a number of “Open Questions”, some of which we highlight below, on which relevant stakeholders are invited to provide views and comments by November 28, 2024. The drafters have reportedly aimed to make the Code “a flexible, targeted, and effective framework”, and the draft reflects that effort. Businesses should watch carefully whether the final version of the Code strikes an appropriate balance between sufficiently concretising the steps to be implemented by companies across the AI value chain (i.e. supply chain) and the proclaimed desire to maintain flexibility.

Limited initial guidance, including on when an entity is a “provider” of GPAI following fine-tuning

The first draft of the Code does not yet include the granularity required to implement operational changes, but it is nevertheless a welcome development, not least because it acknowledges where further guidance will be necessary.

For example, the Q&A accompanying the draft Code confirms that the AI Office intends to provide further clarification on the specific circumstances in which a downstream entity that fine-tunes or modifies an existing GPAI model may become a “provider” of a new model, and hence subject to the extensive obligations on such providers under the EU AI Act. The AI Office acknowledges that this is “a difficult question with potentially large economic implications”. It further explains that, in any case, if an entity becomes a “provider” following fine-tuning, its obligations as a provider will be limited to the fine-tuning, for example by supplementing the existing technical documentation with information on the modifications. In that context, the drafters of the Code have also reportedly confirmed that companies that build AI applications using another provider’s large GPAI model as a foundation will likely not have to bear the entire regulatory burden.

In addition, the Q&A confirms that the Code will provide further detail on what the relevant obligations imply for different ways of releasing GPAI models, including open-sourcing, as the Code acknowledges “the positive impact that open-source models have had on the development of the AI safety ecosystem”.

Key proposals in the draft Code: a rough sense of direction

The provisions of the draft Code adopt a high-level approach, with broad measures and (relatively) more specific sub-measures set out. The Code explains that there will be more detailed Key Performance Indicators (KPIs), which are to be set out in subsequent drafts.

Key proposals include:  

  • Transparency: The draft Code details the type of information records providers of GPAI models must keep (and be prepared to provide to the AI Office and/or to downstream providers on request) in order to comply with their transparency obligations under Articles 53(1)(a) and (b) of the EU AI Act. Examples of such records include an up-to-date Acceptable Use Policy, information on data used for training, testing and validation (including the name of all web crawlers used) and detail on the core elements of model training (such as training stages and methods of optimisation). The draft Code contains an Open Question inviting comments on how the Code should provide greater detail on the type of information records.
  • Rules related to copyright: The Code contains proposals explaining how GPAI model providers may comply with their obligations under Article 53(1)(c) of the EU AI Act in relation to copyright laws, including drawing up and implementing a copyright policy covering the life-cycle of GPAI models.
  • Taxonomy of systemic risks: Drawing from the elements of a taxonomy of systemic risks, the draft Code identifies certain types of systemic risk, such as (i) risks related to offensive cyber capabilities; (ii) risks enabling chemical, biological, radiological or nuclear weapons attacks; (iii) issues related to the inability to control powerful autonomous GPAI models; (iv) the automated use of AI for research and development; (v) the facilitation of large-scale persuasion and manipulation; and (vi) large-scale illegal discrimination against individuals, communities or societies. Stakeholders are invited to provide feedback on a number of Open Questions related to the taxonomy of systemic risks, including the relevant considerations or criteria to take into account when defining whether a risk is a systemic risk, and whether any of the identified risks should be prioritised for addition to the main taxonomy of systemic risks.
  • Systemic risks - Safety and Security Framework: Once GPAI models with systemic risks have been identified, the Code promises to detail the risk management policies GPAI model providers can adhere to in order to proactively assess and proportionately mitigate systemic risks. The draft Code includes suggestions for technical as well as governance risk mitigation, and lists approximately 35 Open Questions, which are overall broad in nature. For example, in relation to risk mitigation, the Code invites comments on what cybersecurity standards should be applied to GPAI models with systemic risks, depending on the systemic risk indicators and tiers of severity.

Next Steps

Discussions on the draft Code between the Working Groups, stakeholders, EU Member State representatives and international observers will begin in the week commencing 18 November 2024, with three further rounds of drafting, and hence changes to this first draft, to take place over the next five months. This iterative process is intended to conclude with a finalised and adopted GPAI Code of Practice in time for the 2 May 2025 deadline under the EU AI Act.


© 2024 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.