New Executive Order on Artificial Intelligence Offers Insight into Potential Requirements
Published: December 10, 2020
The White House published a new executive order on artificial intelligence on December 3, 2020.
- The EO establishes nine principles to which AI capabilities used by agencies must conform. National security and defense solutions are exempt.
- AI capabilities currently in use must conform to the new set of principles in the EO, meaning engineering work may be required.
- Agencies that do not have national security/defense missions must publish an inventory of the AI capabilities they are using by late spring 2021.
- Inventories will likely give industry insight into which AI capabilities agencies are currently using and which areas of agency work might benefit from AI.
Lost amid the ongoing controversy surrounding the presidential election and passage of the Fiscal 2021 National Defense Authorization Act comes news that the White House recently published an “Executive Order on Promoting the Use of Trustworthy Artificial Intelligence (AI) in the Federal Government.” This order continues the veritable flood of policy documents and legislation surrounding the use of AI that we have seen in 2020 as Congress and the White House come to terms with the potentially game-changing capabilities provided by the technology.
The order recognizes that U.S. federal “agencies are already leading the way in the use of AI by applying it to accelerate regulatory reform; review solicitations for regulatory compliance; combat fraud, waste, and abuse committed against taxpayers; identify information security threats and assess trends in related illicit activities; enhance the security and interoperability of government information systems; facilitate review of large datasets; streamline processes for grant applications; model weather patterns; and to facilitate predictive maintenance.” The order encourages agencies to continue finding new use cases for AI, while noting at the same time that only those agencies dealing with national defense and security have created guidelines governing the technology. For agencies that have not created guidelines, generally those not handling national security or defense, the new executive order “establishes additional Principles for the use of AI,” and provides a common process for developing AI policy guidance.
These principles include:
- Respecting U.S. Values. Agencies shall design, develop, acquire, and use AI in a manner that exhibits due respect for our nation’s values and is consistent with the Constitution and all other applicable laws and policies, including those addressing privacy, civil rights, and civil liberties.
- Purpose Driven. Agencies shall seek opportunities for designing, developing, acquiring, and using AI, where the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed.
- Ensuring Accuracy and Reliability. Agencies shall ensure that their application of AI is consistent with the use cases for which that AI was trained, and such use is accurate, reliable, and effective.
- Ensuring Safety and Resilience. Agencies shall ensure the safety, security, and resiliency of their AI applications, including resilience when confronted with systematic vulnerabilities, adversarial manipulation, and other malicious exploitation.
- Understandability. Agencies shall ensure that the operations and outcomes of their AI applications are sufficiently understandable by subject matter experts, users, and others, as appropriate.
- Humans in the Loop. Agencies shall ensure that human roles and responsibilities are clearly defined, understood, and appropriately assigned for the design, development, acquisition, and use of AI.
- Regular Monitoring. Agencies shall ensure that their AI applications are regularly tested against these Principles. Mechanisms should be maintained to supersede, disengage, or deactivate existing applications of AI that demonstrate performance or outcomes that are inconsistent with their intended use or this order.
- Transparency. Agencies shall be transparent in disclosing relevant information regarding their use of AI to appropriate stakeholders, including the Congress and the public, to the extent practicable and in accordance with applicable laws and policies.
- Accountability. Agencies shall be accountable for implementing and enforcing appropriate safeguards for the proper use and functioning of their applications of AI, and shall monitor, audit, and document compliance with those safeguards. Agencies shall provide appropriate training to all agency personnel responsible for the design, development, acquisition, and use of AI.
To implement these principles, the Office of Management and Budget (OMB) must publish a roadmap for agency policy guidance by early March 2021, while the Federal Chief Information Officers Council must, by February 2021, “identify, provide guidance on, and make publicly available the criteria, format, and mechanisms for agency inventories of non-classified and non-sensitive use cases of AI by agencies.” Concerning those use cases, “each agency shall prepare an inventory of its non-classified and non-sensitive use cases of AI … including current and planned uses, consistent with the agency’s mission.”
The inventories are to be shared among agencies to provide examples for re-use, and they will be made public in late spring 2021. Each current use of AI must also be brought into conformance with the principles outlined in the executive order, meaning engineering work may be required on AI capabilities already in use. The order does not clarify whether Robotic Process Automation (RPA), which many agencies currently use, is considered AI for these purposes. Lastly, AI capabilities used for national security, defense, and R&D programs are not formally subject to these rules, although research programs are expected to adhere to them.
Implications for Industry
Visibility. Once agency AI inventories begin appearing, industry will be able to identify where agencies are currently using AI and where they are not. This will help inform productive conversations about where industry partners may be able to help.
Growth in the Civilian Sector. Because agencies with national security and defense missions are not covered by the EO, the information coming out about AI use will concern civilian agencies. This lays the groundwork for a potential explosion of AI investment across the civilian sector. Industry partners may want to shift business development resources accordingly.
Forthcoming Requirements. The principles outlined in the EO can easily be converted into contract requirements and should begin appearing in procurements of AI capabilities. Industry partners offering AI capabilities should take note and ensure their solutions conform; failing to do so could mean the difference between winning and losing business.
Concerns that a new presidential administration might reverse the current administration’s course on AI investment are likely misplaced. Investing in emerging technologies such as AI will be a priority for federal agencies regardless of who occupies the White House, and while a new administration will certainly reverse many of its predecessor’s executive orders, as happened in 2017, this one is likely to remain in effect.