Political Deal on the EU AI Act: A Milestone but the Journey Continues
A ground-breaking, first-of-its-kind, sector-agnostic, comprehensive law regulating Artificial Intelligence (AI) was agreed in the European Union on Friday, 8 December 2023, between the European Parliament, Commission and Council. This law, the “EU AI Act,” has extraterritorial scope: it will likely apply to entities that place AI systems on the market or put them into service in the EU, regardless of where those entities are based or established. Though a political deal has been reached, the journey continues: the technical teams need to draft the provisions of the final EU AI Act to reflect the agreed deal; certain details still have to be fine-tuned; and a final draft will have to be produced, likely at the end of January / beginning of February 2024. The legislators maintain that the EU AI Act is future-proof (with built-in mechanisms for updates) and strikes a balance between protecting fundamental rights and fostering innovation. These sentiments underlie the EU’s hope that the EU AI Act will be a leading model that galvanizes similar legislation and “global convergence” on AI regulation. By contrast, the United Kingdom at this stage is not pursuing a far-reaching, horizontal law on AI and is instead relying on existing laws and regulations.
The first draft of the EU AI Act was published by the Commission on 21 April 2021 (see our post here). Two and a half years later, and after reportedly more than 600 hours of heated negotiations, including a three-day marathon trilogue discussion last week, the EU announced late on 8 December 2023 that a deal on the draft law had been reached. Just in time as well, because with EU elections scheduled for June 2024, this was the last possible moment to achieve compromise if the draft EU AI Act was to be finalised and adopted in this Parliament.
In the run-up to the final negotiations, a number of crucial sticking points emerged.
- First, as developers and users of “high-risk” AI systems will be subject to a raft of new restrictions, there was fierce debate over what constitutes a “high-risk” AI system. The description and list of such systems in the original draft law were very broad; parties on the right side of the political spectrum generally pushed for a narrower list, arguing that many legitimate business practices, for example in relation to fraud detection, would be wrongly classified as high-risk under the broader definition.
- A second contentious issue concerned systems deemed to pose unacceptable risk and therefore subject to an outright ban. The issue there was, similarly, a tension between the desire to protect fundamental rights (and therefore, for example, to ban systems that use subliminal techniques that may harmfully distort people’s behaviour) and the aim to promote innovation.
- A third vital sticking point, centred on increasing the competitiveness of the EU on the global AI scene, was the regulation of foundation models or general purpose AI, including large language models (LLMs). With France and Germany particularly keen to protect the AI start-ups that have emerged in their jurisdictions, certain EU Member States were vehemently opposed to including any provisions regulating such models. The European Parliament, on the other hand, in the draft it produced in June 2023, included extensive proposals for regulating foundation models, given the exponential development and global spread of these models since the original drafting of the law in 2021.
- There were a number of other points of contention, including in relation to governance, enforcement and penalties. With such a wide range of seemingly irreconcilable positions, it was far from certain that a deal on the EU AI Act could be reached.
Yet a deal was reached in the final hours of Friday, 8 December 2023. The Parliament’s announcement confirmed that the EU AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field”. The highlights of the agreed deal include:
- Wider ban for unacceptable risk systems: It appears that the list of banned AI systems has been enlarged from the original Commission draft and now includes (among other systems) biometric categorisation systems that use sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; social scoring based on social behaviour or personal characteristics; predictive policing; and emotion recognition in the workplace and educational institutions.
- Remote biometric identification systems (RBI): The use of RBI in publicly accessible spaces for law enforcement purposes will be subject to prior judicial authorisation and limited to strictly defined lists of crimes. Both “post-remote” and “real-time” RBI may only be used for specific purposes, with time and location limitations. It remains to be seen whether the distinction between “verification” (a one-to-one system, i.e. where a person provides data that is compared against their stored biometric record) and “identification” (a one-to-many system, i.e. where the biometric data of one person is compared with that of many), with less onerous obligations on verification systems, will be retained in the final draft.
- Obligations on high-risk systems, in light of fundamental rights, the environment and the rule of law: It seems that the risk of such systems will be assessed against their “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law”. The new obligations on high-risk systems will include a mandatory fundamental rights impact assessment. EU citizens will have a right to launch complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that impact their rights.
- Foundation models / general purpose AI will be regulated: Foundation models or general purpose AI, including LLMs, will be subject to new obligations under a two-tier approach. All such models (the “lower tier”, i.e. the less regulated) will have to adhere to transparency requirements, including drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training. The “higher tier” (i.e., more regulated) models, deemed “high-impact” models with “systemic risk”, will be subject to additional, more stringent obligations, such as conducting model evaluations, assessing and mitigating systemic risks, conducting adversarial testing, reporting to the Commission on serious incidents, ensuring cybersecurity and reporting on their energy efficiency.
- Promoting innovation: In order to promote innovation, the law will provide for so-called regulatory sandboxes and real-world testing, allowing innovative AI to be developed and trained before it is placed on the market.
- Penalties: The maximum penalties can reach €35 million (approx. US$37.6 million, £29.9 million) or 7% of global turnover, whichever is higher.
- Timing: Once the law is finalised and adopted, as expected in Q1 2024, most obligations will become binding within 24 months, i.e. by early 2026. However, the ban on prohibited use cases will become binding sooner, within six months, and the obligations on foundation models / general purpose AI will also become binding earlier, within 12 months.
It remains to be seen at what cost the political deal was reached, in terms of clarity on the proposed new legal obligations on the various actors in the AI value chain. In the weeks ahead, as pen is put to paper on the remaining provisions of the draft EU AI Act, businesses are encouraged to continue to engage actively in the process so they do not miss the opportunity to be heard on how the fine details of the Act are finalised. The Global Akin AI Group is available to discuss the draft EU AI Act and other AI developments at your convenience.