What the New AI EO Means for Contractors

Published: November 01, 2023

Federal Market Analysis | Artificial Intelligence/Machine Learning | Information Technology | Policy and Legislation

The White House released a long-awaited artificial intelligence (AI) executive order calling for a whole-of-government effort to deploy trustworthy AI technologies.

On Monday, the Biden Administration released the highly anticipated artificial intelligence (AI) executive order (EO), Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO builds on previous actions to achieve trustworthy use of AI, including the Blueprint for an AI Bill of Rights, the NIST AI Risk Management Framework, and Executive Order 13960.

The White House sets out to provide comprehensive guidance to facilitate and expand the safe and trustworthy use of AI, and leverages federal agency authorities to establish AI guidance and initiatives to meet this goal. Federal CIO Clare Martorana called the EO “the first AI guidance that is being put out by any government in the world.”

The EO is organized into eight main principles, each with steps and calls to action. Many governing and verification practices outlined throughout the EO are required to take effect within 90 days of the EO’s publication date. Development of safety and security standards, development and testing plans, and other guidance, companion resources, and studies detailed in the EO are largely due within 180 to 270 days of publication. Given the federal government’s reliance on contractors for AI innovation, there are contractor implications for each of the EO’s principles.

New Standards for AI Safety and Security

  • Require developers to share safety test results and other critical information with the U.S. government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • Protect against the risks of using AI to engineer dangerous biological materials.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
  • Order the development of a National Security Memorandum that directs further actions on AI and security.

Contractor Implications: The EO represents the most significant AI governance action taken thus far by the federal government, calling for additional federal oversight and technical standards for contractors, particularly during AI solution development. For example, the EO invokes the Defense Production Act to require developers to share safety test results when developing models for national security-related purposes. The comprehensive standards outlined by the EO also require transparency of AI software before public release to address AI threats to and within critical infrastructure.

The EO also directs the Secretary of Commerce to require Infrastructure-as-a-Service providers to report when a foreign person transacts with them to train large AI models that pose major cyber threats, and prohibits those providers from doing business with foreign resellers without notification to Commerce. For this and additional cloud technology impacts from the AI EO, see Cloud Technology in the New Executive Order on Artificial Intelligence.

Moreover, the AI EO calls for pilot projects to identify, develop, test, and apply AI capabilities to cyber programs to aid in identifying and remedying vulnerabilities in U.S. systems, networks, and software. For more information on the cyber-related provisions within the AI EO, see Cybersecurity Elements in the New Artificial Intelligence Executive Order.

Protecting Americans’ Privacy

  • Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques.
  • Strengthen privacy-preserving research and technologies.
  • Evaluate how agencies collect and use commercially available information.
  • Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.

Contractor Implications: The emphasis on privacy-preserving technologies will lead to more investment in federal AI research and development (R&D) contracts and grants. For example, the AI EO requires the Director of the National Science Foundation (NSF) to fund the creation of a Research Coordination Network to advance privacy research and scale privacy-enhancing technologies (PETs).

The EO also calls for additional oversight of data sources, both those used by contractors and those procured from contractors. Additionally, guidelines to protect citizen data will lead to increased oversight and an added layer of data management, use, sharing, and integration policies and activities supporting evidence-based decision-making.

Advancing Equity and Civil Rights

  • Provide clear guidance to landlords, federal benefits programs, and federal contractors.
  • Address algorithmic discrimination.
  • Ensure fairness throughout the criminal justice system.

Contractor Implications: Ensuring the absence of AI bias will require more development time for AI contractors. Contractors may also need to consider the diversity of their AI development workforce to improve their ability to recognize and identify bias.

Additional training by AI experts for federal entities, especially at the Department of Justice, is needed to help identify existing bias patterns and inform criminal justice policy.

Standing Up for Consumers, Patients, and Students

  • Advance the responsible use of AI in healthcare.
  • Shape AI’s potential to transform education.

Contractor Implications: The EO prompts increased investment in AI solutions for healthcare, particularly solutions designed for specific healthcare data. For example, HHS is charged with establishing an HHS AI Task Force to develop policies, frameworks, and/or regulatory action on the responsible deployment and use of AI in healthcare.

The EO will also drive additional investment in network and communications technologies to protect consumers. For example, the EO encourages the FCC to consider the potential effects of AI on networks and consumers, including examining how AI can improve spectrum management, expand opportunities to share non-federal spectrum, and improve network security through emerging technologies that incorporate AI, such as 6G and Open RAN.

Supporting Workers

  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers.
  • Produce a report on AI’s potential labor-market impacts and strengthen federal support for workers facing labor disruptions from AI.

Contractor Implications: Contractors will also need to develop principles, best practices, and mechanisms to monitor AI-driven issues across their own organizations. The EO calls for studies and reports by the Department of Labor to assess AI’s impact on workers. Results of such studies could lead to additional labor policies for agencies and contractors alike.

Promoting Innovation and Competition

  • Catalyze AI research across the United States.
  • Promote a fair, open, and competitive AI ecosystem.
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States.

Contractor Implications: The AI EO encourages a significant increase in AI R&D investments and initiatives. Specifically, the EO requires the launch of a pilot program to implement the National AI Research Resource, integrating and making AI resources and data available to the broader research community. Further, the EO calls for NSF to establish at least four new National AI Research Institutes, in addition to the 25 institutes currently funded.

The AI EO also charges the Department of Energy with expanding partnerships with industry, academia, and other agencies to utilize the department’s computing capabilities and AI testbeds to advance foundation models that support new applications in science, energy, and national security.

To advance AI innovation in the healthcare sector, the AI EO calls on HHS to prioritize grantmaking and awards to support AI-enabled tools in areas such as developing personalized immune-response profiles for patients.

Additionally, the AI EO requires the VA to host two three-month nationwide AI Tech Sprint competitions to improve the quality of veterans’ healthcare. In fact, the VA announced the first competition a day after the release of the AI EO.

Advancing American Leadership Abroad

  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI.
  • Accelerate development and implementation of vital AI standards.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges.

Contractor Implications: Agencies, in particular the Departments of Commerce and State, may turn to contractors for assistance and input in collaborating with global partners and establishing international frameworks to harness AI’s benefits and mitigate its risks.

Ensuring Responsible and Effective Government Use of AI

  • Issue guidance for agencies’ use of AI.
  • Help agencies acquire specified AI products and services.
  • Accelerate the hiring of AI professionals.

Contractor Implications: The EO calls for federal agencies to ramp up agency and cross-agency leadership in AI technologies, which will influence and increase federal AI acquisitions. For example, the EO calls on OMB to chair an interagency council to coordinate the development and use of AI at federal agencies. The EO also calls for agencies to designate Chief Artificial Intelligence Officers to coordinate and promote the use of AI at their respective agencies. The DOD’s Chief Digital and Artificial Intelligence Office (CDAO) may serve as an example for agencies to follow in implementing this requirement.

The EO also includes provisions for increased federal AI investment. For instance, the EO calls on the Technology Modernization Fund to prioritize funding for AI projects for at least one year, particularly generative AI in service of mission delivery. The EO also charges GSA, OMB, and other agencies to take steps to “facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce. Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.”

Upcoming AI Legislation and Guidance

In the EO’s fact sheet, the Biden Administration promises to work with Congress to pursue additional legislation on responsible AI. The current Congress already has several pieces of AI-related legislation circulating, including a recent bill proposing an AI bug bounty program, AI regulation in the financial services industry, and more. OMB also issued a draft of its AI memorandum to federal agencies. The purpose of the memorandum is to establish “new agency requirements and guidance for AI governance, innovation, and risk management, including through specific minimum risk management practices for uses of AI that impact the rights and safety of the public.” Public comment on the draft OMB guidance is due no later than December 5, 2023.