The National Institute of Standards and Technology (NIST) recently unveiled the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0, or “Framework”). This highly anticipated and detailed Framework is intended as a voluntary guide for designing, developing, using and evaluating AI-related products and services with trustworthiness considerations in mind. Organizations can use the Framework to better prepare for the unique and often unpredictable risks associated with AI systems. Although there is no legal requirement to implement the Framework, it will likely be used to assess the reasonableness of AI technology, viewed in parallel with the Blueprint for an AI Bill of Rights in the U.S. (discussed here) and the European Union’s (EU) Artificial Intelligence Act (discussed here).
The Framework is divided into two parts, with part 1 describing the intended audience, explaining how organizations can best frame AI risk, and outlining what trustworthy AI systems look like.1 Trustworthy systems, according to the Framework, are valid, reliable, secure, accountable and transparent, explainable, privacy-enhanced, and designed with any harmful biases managed. Part 2 of the Framework is the core of the guidance, laying out four categories of functions that organizations should adopt to address the risks of AI systems.2 These functions are:
- Govern – This function is about cultivating a risk management culture, including implementing appropriate structures, policies and processes to identify and manage AI risks. Risk management must be a priority for senior leadership, who set the tone for organizational culture, and for management, who align the technical aspects of AI risk management with organizational policies and operations. Unlike the other functions, which are specific to certain parts of the AI lifecycle, this function applies to all stages of an organization’s AI risk management process.
- Map – This function is intended to enhance an organization’s ability to identify AI risks and their broader contributing factors. After documenting the intended purposes and expectations of the AI system, the organization should weigh the system’s benefits and risks to individuals, communities and other organizations against the status quo. Contextual information must be considered, along with the specific methods used to complete the tasks the AI system will support and information on the system’s knowledge limits. The outcome of this function serves as the basis for the two functions that follow.
- Measure – This function uses the information identified in the Map function, employing quantitative, qualitative or mixed-method risk assessment methods and the input of independent experts to analyze and benchmark AI risks and their impacts. AI systems should be analyzed for trustworthy characteristics, social impact and human-AI configurations. The outcome of this function serves as the basis for the Manage function.
- Manage – This function involves allocating resources to the mapped and measured risks on a basis defined by the Govern function. Identified risks must be managed to increase transparency and accountability, prioritizing higher-risk AI systems. After determining whether the AI system achieves its intended purpose, risk treatments should be prioritized based on their projected impact. Systems showing outcomes inconsistent with the intended purpose should be superseded, disengaged or deactivated. Organizations should continue to apply risk management over time as new and unforeseen methods, needs, risks or expectations emerge. (A minimal illustrative sketch of how these four functions might fit together follows this list.)
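To make the interplay of the four functions more concrete, the sketch below models a hypothetical AI risk register that steps a single system through Govern, Map, Measure and Manage. The Framework does not prescribe any code, schema or tooling; every name here (AIRiskRegister, RiskLevel, the treatment strings) is an illustrative assumption, not NIST terminology.

```python
# Hypothetical sketch only: the AI RMF does not prescribe code or a data schema.
# All class, field and threshold names below are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIRisk:
    description: str                  # Map: the identified risk and its context
    level: RiskLevel = RiskLevel.LOW  # Measure: assessed severity
    treatment: str = ""               # Manage: chosen response


@dataclass
class AIRiskRegister:
    """Toy register walking one AI system through the four RMF functions."""
    system_name: str
    governance_policy: str            # Govern: the organizational policy in force
    risks: list[AIRisk] = field(default_factory=list)

    def map_risk(self, description: str) -> AIRisk:
        # Map: document the risk and the context in which the system operates.
        risk = AIRisk(description=description)
        self.risks.append(risk)
        return risk

    def measure(self, risk: AIRisk, level: RiskLevel) -> None:
        # Measure: record a quantitative or qualitative assessment of the mapped risk.
        risk.level = level

    def manage(self) -> None:
        # Manage: assign treatments, addressing higher-risk items first.
        for risk in sorted(self.risks, key=lambda r: r.level.value, reverse=True):
            risk.treatment = ("supersede, disengage or deactivate"
                              if risk.level is RiskLevel.HIGH
                              else "monitor and mitigate")


if __name__ == "__main__":
    register = AIRiskRegister("resume-screening model", "AI governance policy v1")
    r = register.map_risk("Potential disparate impact on protected groups")
    register.measure(r, RiskLevel.HIGH)
    register.manage()
    print(r)
```

In practice, the Govern function would be expressed through organizational policies, roles and accountability structures rather than a single field, but the ordering above (establish governance, then map, measure and manage on a recurring basis) mirrors the flow the Framework describes.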
Comments on the Framework will be accepted until February 27, 2023, with an updated version set to launch in spring 2023.
1 In this Framework, “AI system” is defined as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments” designed to operate with varying levels of autonomy.
2 Dep’t of Com., NIST, Artificial Intelligence Risk Management Framework (January 26, 2023), available at https://www.nist.gov/itl/ai-risk-management-framework.