UK National AI Strategy Announced Following the AI Roadmap

Mar 26, 2021

Reading Time : 3 min

By: Natasha G. Kohne, Jenny Arlington, Jay Jamooji, Charlotte Ezaz (Trainee Solicitor)

AI appears to be a priority for the UK government. According to the Global AI Index developed by Tortoise Media (reportedly the first index to benchmark nations on their level of investment, innovation and implementation of AI), the UK is ranked third, after the United States and China. In 2020, UK firms adopting or creating AI-based technologies received £1.78 billion (approx. US$2.45 billion) in funding, according to the CEO of Tech Nation, a UK government-backed initiative assisting tech companies.

The AI Strategy likely seeks to further capitalise upon this potential, by using AI to create new jobs, improve productivity, tackle climate change and deliver better public services in a way that is both “globally ambitious and socially inclusive.”

The new AI Strategy will focus on:

  • Growth of the economy through widespread use of AI technologies.
  • Ethical, safe and trustworthy development of responsible AI.
  • Resilience in the face of change through an emphasis on skills, talent and research & development (R&D).1

The announcement of the AI Strategy takes up a recommendation set out by the UK AI Council (an expert committee advising the UK government on the AI ecosystem) in its AI Roadmap of January 6, 2021, which called for the development of such a strategy.

That follows in the footsteps of the European Commission, which produced a White Paper on AI in February 2020 (see our post here). Where the European Union (EU) White Paper looks “towards an ecosystem for excellence and trust” in AI policy and regulation, so too does the UK’s AI Roadmap. These strategies follow on from the respective EU and UK drives towards strategising on Big Data (see our posts on the UK’s National Data Strategy and the EU’s Digital Services Act package).

The UK AI Council has welcomed the UK government’s adoption of the AI Strategy, which the AI Roadmap describes as essential in prioritising and setting out a time frame for implementation of the Roadmap’s aims.

The aims of the AI Roadmap are twofold. First, the AI Roadmap states that it is necessary to “double-down” on recent investments that the UK has made in AI, in a call for continued funding of the area. The second principle underpinning the AI Roadmap advocates that support for AI should reflect the rapidity with which the science and technology in AI are developing, in order to be adaptable to disruption. The approach is one that seeks to ensure that the UK is at the forefront of integrating approaches to ethics, security and social impacts in the development of AI in coming decades. This is seen as a necessary step to foster “full confidence in AI across society.”

Accordingly, the AI Roadmap sets out 16 recommendations to help the UK government develop an AI Strategy, split into four pillars. One of the pillars focuses on “Data, Infrastructure and Public Trust”. In that context, the AI Roadmap recommends that the UK should lead in developing appropriate standards to frame the future governance of data, and that it should also lead in finding ways to enable public scrutiny of automated decision-making and to ensure the public can trust AI.

The other pillars focus on “Research, Development and Innovation”; “Skills and Diversity”; and “National, Cross-sector Adoption”, with particular emphasis on healthcare, climate change, and defence and security.

It remains to be seen how the AI Roadmap’s recommendations will be reflected in the AI Strategy. We are following closely what further law and regulation on AI may be developed in that context.


1 The UK government’s Research and Development Roadmap aims to boost R&D investment to 2.4 percent of gross domestic product (GDP) by 2027.

© 2024 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.