The National Institute of Standards and Technology (NIST) recently unveiled the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0, or “Framework”). This highly anticipated and detailed Framework is intended as a voluntary guide for designing, developing, using and evaluating AI-related products and services with trustworthiness considerations in mind. Organizations can use the Framework to better prepare for the unique and often unpredictable risks associated with AI systems. Although there is no legal requirement to implement the Framework, it will likely be used to assess the reasonableness of AI technology, viewed in parallel with the Blueprint for an AI Bill of Rights in the U.S. (discussed here) and the European Union’s (EU) Artificial Intelligence Act (discussed here).
The Framework is divided into two parts, with Part 1 describing the intended audience, explaining how organizations can best frame AI risk, and outlining what trustworthy AI systems look like.1 Trustworthy systems, according to the Framework, are characterized by valid, reliable, secure, accountable and transparent AI that is explainable and privacy-enhanced, with any harmful biases managed. Part 2 of the Framework is the core of the guidance, laying out four categories of functions that organizations should adopt to address the risks of AI systems.2 These functions are:
- Govern – This function is about cultivating a risk management culture, including implementing appropriate structures, policies and processes to identify and manage AI risks. Risk management must be a priority for senior leadership, who set the tone for organizational culture, and for management, who align the technical aspects of AI risk management with organizational policies and operations. Unlike the other functions, which are specific to certain parts of the AI lifecycle, this function applies to all stages of an organization’s AI risk management process.
- Map – This function is intended to enhance an organization’s ability to identify AI risks and their broader contributing factors. After documenting the intended purposes and expectations of the AI system, the organization should weigh the system’s benefits and risks to individuals, communities and other organizations against the status quo. Contextual information must be considered, along with the specific methods used to complete tasks that the AI system will support and information on the system’s knowledge limits. The outcome of this function serves as the basis for the subsequent two functions.
- Measure – This function uses the information identified in the Map function, employing quantitative, qualitative or mixed-method risk assessment methods and the input of independent experts to analyze and benchmark AI risks and their impacts. AI systems should be analyzed for trustworthy characteristics, social impact and human-AI configurations. The outcome of this function serves as the basis for the Manage function.
- Manage – This function involves allocating resources to the mapped and measured risks, on a basis defined by the Govern function. Identified risks must be managed to increase transparency and accountability, prioritizing higher-risk AI systems. After determining whether the AI system achieves its intended purpose, risk treatment should be prioritized based on impact. Systems showing outcomes inconsistent with the intended purpose should be superseded, disengaged or deactivated. Organizations should continue to apply risk management over time as new and unforeseen methods, needs, risks or expectations emerge.
Comments on the Framework will be accepted until February 27, 2023, with an updated version set to launch in spring 2023.
1 In this Framework, “AI system” is defined as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments” designed to operate with varying levels of autonomy.
2 Dep’t of Com., NIST, Artificial Intelligence Risk Management Framework (January 26, 2023), available at https://www.nist.gov/itl/ai-risk-management-framework.