Cybersecurity Elements in the New Artificial Intelligence Executive Order
Published: November 02, 2023
Federal Market Analysis | Artificial Intelligence/Machine Learning | Critical Infrastructure Protection | Cybersecurity | Information Technology | OMB | Policy and Legislation
The latest directive from the White House includes several provisions aimed at both securing AI and leveraging AI for cybersecurity capabilities.
The White House recently released an executive order (EO) focused on artificial intelligence. "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" seeks to build on previous federal actions to advance the development of trustworthy AI.
One of the governing principles in this AI EO is that “Artificial Intelligence must be safe and secure.” This requires “addressing AI systems’ most pressing security risks — including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers…”
Here are the cybersecurity-related provisions in the new directive.
Developing AI Safety and Security (Section 4.1)
The EO directs the National Institute of Standards and Technology (NIST) to establish guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems, including addressing:
- Generative AI within the AI Risk Management Framework and the Secure Software Development Framework, incorporating secure development practices for generative AI and for dual-use foundation models (i.e., AI with potential for offensive cyber operations).
- Evaluating and auditing AI capabilities that could cause harm, such as in the areas of cybersecurity and biosecurity (e.g., the creation of chemical, biological, radiological, or nuclear (CBRN) weapons).
- AI red-teaming guidelines, procedures, and processes for AI developers to test the safety, security, and trustworthiness of their AI. The EO also directs the Secretary of Energy to develop and implement a plan for building out the Department of Energy's (DOE) AI model evaluation tools and AI testbeds.
Ensuring Safe and Reliable AI (Section 4.2)
- Reporting dual-use AI. Companies developing potential dual-use AI foundation models are required to report to the Department of Commerce on their activities in training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of the training process against sophisticated threats. Companies are also required to report the results of any red-team testing of developed dual-use foundation models, including testing results related to AI capabilities for developing biological weapons or offensive cyber exploits.
- Reporting malicious cyber activities. The Secretary of Commerce is to propose regulations requiring U.S. Infrastructure-as-a-Service (IaaS) providers to report when a foreign person transacts with that IaaS provider to train a large AI model for potentially malicious cyber activities. Commerce is also to determine and maintain the technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity.
- Identifying foreign actors. Commerce is to propose regulations requiring U.S. IaaS providers and their foreign resellers to verify the identity of any foreign person that obtains an IaaS account from the foreign reseller, with the objective of identifying foreign malicious cyber actors using any such products.
AI in Critical Infrastructure and in Cybersecurity (Section 4.3)
- Evaluating risks of AI to CI. The head of each agency with regulatory authority over critical infrastructure, in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), shall evaluate potential risks of the use of AI in critical infrastructure sectors, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber-attacks. The evaluation is to consider ways to mitigate these vulnerabilities.
- AI risk management. The Secretary of Homeland Security is to incorporate NIST's AI Risk Management Framework (NIST AI 100-1), as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators.
- Financial institution risks. The Secretary of the Treasury is to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- AI advisory board. The Secretary of Homeland Security shall establish an Artificial Intelligence Safety and Security Board as an advisory committee that includes AI experts from the private sector, academia, and government to advise DHS on improving security, resilience, and incident response related to AI usage in critical infrastructure.
- AI for cyber vulnerability remediation. The Defense Department and the Department of Homeland Security shall each develop and conduct a pilot project to identify, develop, test, evaluate, and deploy AI capabilities to aid in the discovery and remediation of vulnerabilities in critical federal software, systems, and networks. DOD and DHS will report to the White House on the results of the pilot projects and any lessons learned on how to deploy AI capabilities effectively for cyber defense. These pilots build on the administration's ongoing AI Cyber Challenge effort to harness AI capabilities to make software and networks more secure.
Safe Use of Federal Data for AI Training (Section 4.7)
- Mitigating Risks of Open Federal Data. The Chief Data Officer Council shall develop initial guidelines for performing security reviews, including reviews to identify and manage the potential security risks of releasing federal data that could aid in the development of CBRN weapons as well as the development of autonomous offensive cyber capabilities. Agencies shall then use these guidelines to conduct security reviews of all data assets in the federal comprehensive data inventory and address potential security risks with respect to the ways in which that data could be used to train AI systems.
Securing Health Information (Section 8)
- HHS AI Task Force. The Secretary of HHS shall establish an HHS AI Task Force that will develop a strategic plan that includes policies and frameworks on responsible deployment and use of AI in the health and human services sector, including incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information (PII), as well as measures to address AI-enhanced cybersecurity threats in the health and human services sector.
Guidance for AI Management (Section 10.1)
- AI risk management. OMB is to issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in federal agencies. OMB’s guidance shall specify required minimum risk-management practices for government uses of AI, based on the NIST AI Risk Management Framework, and provide recommendations to agencies regarding external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency. NIST, in coordination with OMB, shall develop guidelines, tools, and practices to support implementation of OMB’s minimum risk-management practices.
- Secure use of generative AI. As generative AI products become more widely available, agencies are discouraged from imposing broad general bans on its use. Instead, agencies should advance the responsible and secure use of generative AI in the federal government by limiting access to specific generative AI services based on specific risk assessments and with appropriate safeguards and risk-management practices in place.
Contractor Implications
Throughout the EO, industry providers of AI solutions and other technologies impacted by AI (which seems to be nearly all of them) should be prepared for increasing federal scrutiny and oversight, especially when it comes to the cybersecurity of their products and service offerings. The expanding set of technical standards specified in the directive will impact contractors and commercial service providers seeking to offer AI capabilities, beginning at the root of AI software and solutions development. Secure by design is even more important when it comes to AI.
The various regulatory elements coming out of this EO will continue to put greater logging, auditing, and reporting requirements on solutions providers, as transparency will be the rule of the day when it comes to AI development and deployment. The added operational costs may squeeze or scare some providers out of the market, or temper the hype that has enveloped this emerging technology segment.
Software or services companies that have no plans to offer AI-enabled capabilities are not immune to the impact of this new EO, because AI is being used to identify and remedy vulnerabilities in software, systems, and networks, raising the bar for solutions providers to ensure that cybersecurity is "baked into" their offerings.