NIST Issues Second Draft of the AI Risk Management Framework

Published: August 26, 2022

Federal Market Analysis | Artificial Intelligence/Machine Learning | Information Technology | NIST

NIST released a second draft of the AI Risk Management Framework (RMF) alongside an AI RMF Playbook featuring suggested actions and references for achieving some of the functions within the RMF.

The potential benefits of artificial intelligence (AI) are vast, with the technology poised to transform the national security, transportation, finance, and law enforcement sectors within the federal government. Nonetheless, AI relies on input data and algorithms, and flaws in either can lead to unintended results from the technology. As a result, NIST is working on an AI Risk Management Framework (RMF) to “address challenges unique to AI systems and encourage and equip different AI stakeholders to manage AI risks proactively and purposefully,” according to the standards agency.

To develop the framework, NIST has been soliciting stakeholder feedback over the past year through workshops and an initial draft of the RMF. Recently, the agency released a second draft of the RMF and is expected to host a third workshop in October. The second draft incorporates additional feedback on the initial draft as well as details on risk thresholds and on AI characteristics and classifications.

The draft RMF comprises two parts. Part 1 is dedicated to explaining trustworthy and responsible AI as well as defining the challenges, risks and impacts of the technology. Part 2 focuses on the core components of the framework, outlining the four functions necessary for AI risk management: Govern, Map, Measure and Manage.

In conjunction with the release of the second draft, NIST issued a draft AI RMF Playbook, intended as a companion resource to the AI RMF. The playbook provides recommended actions, references, and guidance for stakeholders to achieve the outcomes of the above RMF functions. The initial draft of the playbook provides action items for “map” and “govern,” with materials for “measure” and “manage” expected at a later date:

GOVERN

  1. Policies, processes and practices related to managing AI are in place across the organization and implemented effectively.
  2. Accountability structures are in place so that the appropriate teams and individuals are responsible and trained to manage AI risk.
  3. Workforce diversity, equity, inclusion and accessibility processes are prioritized in managing AI.
  4. Organizational teams are committed to a culture that considers AI risk.
  5. Put in place processes for robust stakeholder engagement.
  6. Implement clear policies and procedures to address AI risks from third-party software and data and other supply chain concerns.

MAP

  1. Establish and understand context.
  2. Perform AI system classification.
  3. Understand AI capabilities, usage, goals and cost/benefits.
  4. Map risks and benefits for third-party software and data.
  5. Assess impacts to individuals, groups, communities, organizations, etc.

NIST is accepting responses to the second draft AI RMF and the corresponding playbook until September 29, 2022. The final versions of the framework and playbook are expected to be published in January 2023.

For more analysis of the policies and guidance shaping agency AI initiatives and investments, refer to Deltek's Federal Artificial Intelligence Landscape, 2023 report.