New AI Guidance: NIST Reveals First Version of AI Risk Management Framework

February 22, 2023


The National Institute of Standards and Technology (NIST) recently unveiled the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0, or “Framework”). This highly anticipated and detailed Framework is intended as a voluntary guide for designing, developing, using and evaluating AI-related products and services with trustworthiness considerations in mind. Organizations can use the Framework to better prepare for the unique and often unpredictable risks associated with AI systems. Although there are no legal requirements to implement the Framework, it will likely be used to assess the reasonableness of AI technology, viewed in parallel with the Blueprint for an AI Bill of Rights in the U.S. (discussed here) and the European Union’s (EU) Artificial Intelligence Act (discussed here).

The Framework is divided into two parts. Part 1 describes the intended audience, explains how organizations can best frame AI risk, and outlines the characteristics of trustworthy AI systems.1 Trustworthy systems, according to the Framework, are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. Part 2 is the core of the guidance, laying out four functions that organizations should adopt to address the risks of AI systems.2 These functions are:

  • Govern – This function is about cultivating a risk management culture, including implementing appropriate structures, policies and processes to identify and manage AI risks. Risk management must be a priority for senior leadership, who set the tone for organizational culture, and for management, who align the technical aspects of AI risk management with organizational policies and operations. Unlike the other functions, which are specific to certain parts of the AI lifecycle, the Govern function applies to all stages of an organization’s AI risk management process.
  • Map – This function is intended to enhance an organization’s ability to identify AI risks and their broader contributing factors. After documenting the intended purposes and expectations of the AI system, the organization should weigh the system’s benefits and risks to individuals, communities and other organizations against the status quo. Contextual information must be considered, along with the specific methods used to complete the tasks that the AI system will support and information on the system’s knowledge limits. The outcome of this function serves as the basis for the two functions that follow.
  • Measure – This function uses the information identified in the Map function, employing quantitative, qualitative or mixed-method risk assessment techniques, along with the input of independent experts, to analyze and benchmark AI risks and their impacts. AI systems should be analyzed for trustworthy characteristics, social impact and human-AI configurations. The outcome of this function serves as the basis for the Manage function.
  • Manage – This function involves allocating resources to the mapped and measured risks, on a basis defined by the Govern function. Identified risks must be managed to increase transparency and accountability, prioritizing higher-risk AI systems. After determining whether the AI system achieves its intended purpose, organizations must allocate risk treatment based on projected impact. Systems showing outcomes inconsistent with their intended purpose should be superseded, disengaged or deactivated. Organizations should continue to apply risk management over time as new and unforeseen methods, needs, risks or expectations emerge.

Comments on the Framework will be accepted until February 27, 2023, with an updated version set to launch in spring 2023.


1 In this Framework, “AI system” is defined as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments” designed to operate with varying levels of autonomy.

2 Dep’t of Com., NIST, Artificial Intelligence Risk Management Framework (January 26, 2023), available at https://www.nist.gov/itl/ai-risk-management-framework.
