OMB Issues Draft AI Memo

Published: November 08, 2023

Artificial Intelligence/Machine Learning | OMB | Policy and Legislation

Following the release of the White House AI executive order, OMB released draft guidance containing new agency requirements that may both burden contractors and present them with opportunities.

On the heels of the Biden Administration’s release of a signature AI executive order (EO), the Office of Management and Budget (OMB) issued a proposed memorandum to agencies titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. Public comments on the draft policy are due by December 5, 2023.

As the title suggests, the memo focuses on three areas of AI, establishing new agency guidance and requirements to advance AI innovation while managing risks to “the rights and safety of the public,” according to OMB. The memo applies to all CFO Act agencies and to new and existing AI solutions, with exceptions for some elements of the Intelligence Community and for AI used as a component of national security systems.

Governance

In accordance with the AI EO, agencies must designate a Chief AI Officer (CAIO) within 60 days of issuance of the final memorandum. Agencies may designate an existing official (e.g., the CTO or CDO) as the CAIO, provided that official is positioned to report to at least the Deputy Secretary or equivalent at the agency. The CAIO’s roles and responsibilities outlined by the memo include:

  • Serve as the senior advisor for AI to the head of the agency
  • Advise the CDO and CHCO on the resources and workforce skills necessary to apply AI to the agency’s mission
  • Identify and promote use cases of AI to improve agency mission and advance equity
  • Remove barriers to responsible use of AI (e.g., advancing AI-enabling enterprise infrastructure)
  • Manage the agency’s AI risk program
  • Establish processes to measure and monitor AI application performance

The memo also calls on CFO Act agencies to convene AI governance boards among senior stakeholders to oversee the agency’s use of AI, remove barriers, and manage risk.

Innovation

The OMB memo also charges agencies with developing AI strategies within a year of the final memo’s issuance. Strategies should reflect, among other things, the agency’s AI maturity goals, future AI investments, and a plan for developing the capacities needed to test and maintain the technology (e.g., data, computing, development, testing, and cyber infrastructure).

Agencies are instructed to remove barriers to AI implementation while ensuring responsible use by creating the internal infrastructure to allow for AI development and deployment. Accordingly, the memo makes the following recommendations:

  • Ensure AI projects have access to needed infrastructure (e.g., high-performance computing) and that AI developers have access to software tools, open-source libraries, and deployment and monitoring capabilities
  • Develop data infrastructure to process agency datasets used for testing and operating AI solutions, including maximizing appropriate access to internal, public, and agency-shared datasets and ensuring advanced data management practices
  • Update cybersecurity processes to address the needs of AI applications, including the use of generative AI in Authorizations to Operate and any other oversight processes
  • Take full advantage of special hiring and retention authorities to fill AI talent gaps, including data scientists and engineers, designers, behavioral scientists, contracting officials, managers, and attorneys contributing to agency AI use
  • Assess potential use cases of generative AI for the agency’s mission and establish safeguards and oversight for its use

Risk Management

The memo gives agencies until August 1, 2024, to ensure that the minimum risk management practices outlined in the document are followed for rights-impacting and safety-impacting AI. (Note: the proposed guidance provides detailed definitions of safety-impacting and rights-impacting AI.) Sample practices include testing AI performance in a real-world context, independently evaluating AI, providing adequate human training and assessment, and maintaining processes for human consideration and remedy.

The memo also provides recommendations for managing risks in federal acquisitions of AI. Those recommendations include:

  • Obtain documentation, such as model, data, and system cards, for procured AI
  • Regularly evaluate AI-performance claims made by federal contractors
  • Incentivize the continuous improvement of procured AI
  • Promote opportunities for competition among contractors
  • Treat relevant data as a critical asset for AI maturity, retaining sufficient rights to data to avoid vendor lock-in and to enable continued design and development of procured AI
  • Include tailored risk management requirements in contracts for generative AI (e.g., requiring testing and safeguards, and requiring proper labeling of generated content)

Thoughts for Contractors

Whereas agencies may previously have hesitated to implement and trust AI technologies, the recent AI EO and the OMB memo provide the most comprehensive guidance to date pushing agencies to pursue AI use and innovation for mission and operational improvements. Both, however, also require additional regulation and oversight of AI development, testing, and deployment. While those requirements will place extra burdens on contractors, the EO and draft memo also create additional AI-related opportunities for contractors, especially as agencies are prompted to prioritize building IT environments that enable AI applications. Moreover, the structured AI governance called for by the EO and memo should streamline future agency investment in AI technologies. Nonetheless, only time will tell whether the legislation and funding needed to implement these far-reaching policies, and to support a whole-of-government era of AI, will be put in place.