Joint Statement on the Creation of the Global Partnership on Artificial Intelligence

Jul 14, 2020

Reading Time: 1 min

As announced, GPAI is an international partnership that will aim to promote the responsible development and use of Artificial Intelligence (AI) in a “human-centric” manner. This means developing and deploying AI in a way that is consistent with human rights, fundamental freedoms and shared democratic values. GPAI’s aim is “to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.”

The values which GPAI endorses reflect the core AI principles as promoted by the Organisation for Economic Co-operation and Development (OECD) in the May 2019 OECD Council Recommendation on AI. The OECD will be the host of GPAI’s Secretariat in Paris, and GPAI will draw upon the OECD’s international AI policy leadership. It is thought that this integration will strengthen the evidence base for policy aimed at responsible AI. In addition, GPAI has stated that it is looking forward to working with other interested countries and partners.

Centres of Expertise in Montreal and Paris will provide research and administrative support to GPAI, while the GPAI Secretariat will lend support to GPAI’s governing bodies, consisting of a council and steering committee. GPAI will engage in scientific and technical work and analysis, bringing together experts within academia, industry and government to collaborate across the following four initial working groups:

  1. Responsible AI
  2. Data governance
  3. The future of work
  4. Innovation and commercialization

The focus of these working groups reflects GPAI’s recognition of AI’s potential to act as a catalyst for sustainable economic growth and development, provided that it is developed and used in an accountable, transparent and responsible manner.

GPAI’s short-term priority, however, is to investigate how AI can be used to help with the response to, and recovery from, COVID-19.

The first annual GPAI Multistakeholder Experts Group Plenary is planned to take place in December 2020.

The creation of GPAI is an exciting new step in the global effort to harness the possibilities that AI offers in an ethical and responsible way, while minimizing the risks to individuals’ rights and freedoms. We will be monitoring its progress.
