On 12 July 2024, the European Union Artificial Intelligence Act (AI Act or Act) was published in the Official Journal of the European Union (EU), marking the final step in the AI Act’s legislative journey. Its publication triggers the timeline for the entry into force of the myriad obligations under the AI Act, along with the deadlines we set out below. The requirement to ensure a sufficient level of AI literacy of staff dealing with the operation and use of AI systems will, for example, apply to all providers and deployers on 2 February 2025.
Businesses should prioritise assessing what AI they deploy or develop, take measures to ensure AI literacy (i.e., the skills, knowledge and understanding to make informed use of AI and awareness of its opportunities and risks) and assess whether any of their AI systems will be prohibited, ceasing any such use and development. Next, companies should focus on putting in place a compliance programme, in collaboration with the relevant business teams, covering general-purpose AI models; high-risk AI, such as AI systems used in recruitment, critical digital infrastructure, evaluating creditworthiness and biometrics; and low-risk AI, such as chatbots. In some cases, compliance may require significant time and effort (for example, changes to high-risk AI systems regarding data sets and human oversight).
Our previous alert, following the Council of the EU issuing its final approval of the AI Act on 21 May 2024, sets out the mechanics of the AI Act in further detail. At a high level, the complex and involved AI Act imposes obligations in relation to all general-purpose AI models as well as regarding high-risk AI systems and low-risk AI systems, banning certain AI systems outright. The AI Act has an extra-territorial scope and applies to providers, deployers and a wide range of other participants in the AI value chain (i.e., supply chain). It has steep non-compliance penalties, with the maximum fine reaching 7% of global turnover or 35 million euro (approx. US$37.6 million, £29.9 million), whichever is higher. As detailed in Akin Intelligence, the AI Office, a new body under the EU Commission tasked with overseeing the implementation and enforcement of the AI Act, was set up in June. Another new body, the European AI Board, comprising representatives of EU Member States and aimed at ensuring consistent and effective application of the AI Act, also held its first meeting in June.
Users and developers of AI systems, including general-purpose AI models, should consider which parts of the Act apply to their operations and set up a compliance programme in order to meet their obligations in time in accordance with the relevant deadlines, as follows:
- 1 August 2024: The EU AI Act officially enters into force.
- 2 February 2025: All providers and deployers of AI systems need to ensure, to their best extent, a sufficient level of AI literacy of staff dealing with the operation and use of AI systems. Certain AI practices become prohibited, including certain biometric categorisation and identification systems; AI systems used to classify people by social behaviour or known, inferred or predicted personal or personality characteristics (i.e., ‘social scoring’), resulting in detrimental or unfavourable treatment; and AI systems that deploy subliminal techniques beyond a person’s consciousness, or exploit individuals’ vulnerabilities, with the objective or effect of materially distorting behaviour in a manner that causes or is reasonably likely to cause significant harm.
- 2 August 2025: The requirements for all general-purpose AI (GP AI) models become binding (with some limited exceptions), with stricter obligations for GP AI with “systemic risk”. GP AI models are deemed to have “systemic risk” if certain criteria are met; in some instances, there is a presumption that the criteria are met.
- 2 August 2026: The bulk of the remaining obligations on deployers and providers of AI become binding, including for low-risk and high-risk AI systems. High-risk AI systems are those that fall into eight categories under the Act, such as employment and recruitment, biometrics, access to essential services (such as evaluating creditworthiness) and critical infrastructure, including digital infrastructure. High-risk AI systems already on the market as of 2 August 2026 must comply only if, going forward, there are significant changes in their design. Low-risk AI systems include chatbots as well as tools generating synthetic audio, image, video or text content and deep fakes.
- 2 August 2027 (and beyond): Providers of GP AI models that were already on the market as of 2 August 2025 must comply with the obligations for GP AI. AI systems regulated by specific EU laws (e.g., vehicles, aviation, medical devices, lifts, machinery) become subject to the obligations for high-risk AI systems. By 2030, certain other AI systems, mainly in the public domain, must comply with all remaining relevant obligations.
The Global Akin AI Group is available to discuss the AI Act and its implications for you at your convenience.
The full and final text of the AI Act is available to view on the Official Journal’s website.