NIST Releases Final AI Risk Management Framework

Published: January 31, 2023

Federal Market Analysis | Artificial Intelligence/Machine Learning | Information Technology | Policy and Legislation

The finalized AI Risk Management Framework provides a roadmap for addressing AI risk and improving the trustworthiness and reliability of the transformative technology.

Last Thursday, the National Institute of Standards and Technology (NIST) released the final version of the highly anticipated AI Risk Management Framework (AI RMF). The framework was developed to fulfill a requirement set forth in the National AI Initiative Act of 2020.

While AI opens up significant possibilities across many sectors, it also poses substantial risks to individuals and organizations. Accordingly, the newly released document states that the “AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.”

Simply put, the framework aims to increase the trustworthiness of AI systems. The AI RMF defines an AI system as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

The AI RMF is organized into two parts. Part 1 identifies the harms of AI to individuals, organizations, and ecosystems. For example, AI runs the risk of interfering with a person’s civil liberties, rights, safety, or economic opportunities. Additionally, AI may harm an organization’s business operations and/or reputation, as well as affect global financial systems, supply chains, or other interrelated systems.

Given the identified risks, the framework describes the characteristics that trustworthy AI systems should balance: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
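As a purely illustrative sketch (the framework does not prescribe any particular tooling, so this checklist structure is an assumption), an organization might encode those seven characteristics as an assessment record:

```python
from enum import Enum


class TrustCharacteristic(Enum):
    """The seven trustworthiness characteristics named in the AI RMF."""
    VALID_AND_RELIABLE = "valid and reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure and resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR_WITH_HARMFUL_BIAS_MANAGED = "fair, with harmful bias managed"


def blank_assessment() -> dict:
    """Return an empty per-characteristic assessment record."""
    return {c: "not yet assessed" for c in TrustCharacteristic}


# Hypothetical usage: record a finding against one characteristic.
assessment = blank_assessment()
assessment[TrustCharacteristic.SAFE] = "fail-safe behavior reviewed"
for characteristic, note in assessment.items():
    print(f"{characteristic.value}: {note}")
```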

Part 2 of the framework outlines four specific functions to aid organizations in mitigating the risks of AI systems in practice. Note that the framework document provides a set of categories and subcategories for each of the four functions below; a rough sketch of that hierarchy follows the list.

- Govern: Cultivates a culture of risk management within organizations designing, developing, deploying, or using AI systems.
- Map: Establishes the context in which an AI system operates and frames the risks related to that context.
- Measure: Assesses, analyzes, and tracks the risks identified during mapping.
- Manage: Prioritizes and acts upon mapped and measured risks based on their projected impact.
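To visualize how the functions, categories, and subcategories nest, here is a minimal sketch assuming a simple in-house data model; the identifiers and placeholder text are illustrative, not quotations from the framework:

```python
from dataclasses import dataclass, field


@dataclass
class Subcategory:
    identifier: str   # the framework numbers its subcategories, e.g. "MAP 1.1"
    description: str


@dataclass
class Category:
    name: str
    subcategories: list = field(default_factory=list)


@dataclass
class Function:
    name: str
    categories: list = field(default_factory=list)


# Skeleton of the four AI RMF functions; the category and subcategory
# entries below are placeholders, not the framework's actual text.
core = [
    Function("Govern"),
    Function(
        "Map",
        categories=[
            Category(
                "Context established",  # placeholder label
                subcategories=[Subcategory("MAP 1.1", "placeholder text")],
            )
        ],
    ),
    Function("Measure"),
    Function("Manage"),
]

for fn in core:
    print(fn.name, f"({len(fn.categories)} categories loaded)")
```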

Though the AI RMF is non-binding, NIST has expressed hope that the framework will change the culture and perspective around how AI risks are designed for, measured, and monitored across the private and public sectors. To that end, NIST incorporated perspectives from a range of AI experts in industry, academia, and government in developing the framework. Released alongside the framework is the AI RMF Playbook, a companion resource that suggests ways to implement considerations from the framework. As the framework and playbook are living documents, NIST is accepting feedback on both through the end of February 2023, to be incorporated into an updated version of the playbook expected in spring 2023.