Commerce Continues Pivotal Role in AI Utilization Guidelines

Published: May 21, 2024

Federal Market Analysis | Artificial Intelligence/Machine Learning | DOC | Information Technology

The National Institute of Standards and Technology (NIST) has a leading role in developing AI guidelines.

Six months have passed since President Biden issued Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” While the directive assigned roles to multiple agencies, the Department of Commerce received the lion’s share of the responsibilities.

This article summarizes the Department of Commerce's roles in the federal AI journey.

The National Institute of Standards and Technology (NIST) is leading the development of frameworks, guidance, evaluation benchmarks, testing environments, guidelines for generative AI development and deployment processes, and progress reports on those activities. The Bureau of Industry and Security (BIS) is developing safety measures for future AI models under the Defense Production Act, requiring developers to report their testing methodologies and the processes they use to protect models against potential theft. The National Telecommunications and Information Administration (NTIA) is assessing the risks, benefits, and other implications of AI foundation models with widely available model weights, including their impact on innovation and competition. And, finally, the U.S. Patent and Trademark Office (USPTO) is developing guidelines and frameworks for protecting intellectual property during the patent and copyright processes. The Department recently provided an overview of its progress toward the mandated actions.

National Institute of Standards and Technology: To date, NIST has published the U.S. Artificial Intelligence Safety Institute Vision, Mission, and Strategic Goals and the following four draft documents in response to the EO:

  • Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
  • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
  • NIST AI 100-5, A Plan for Global Engagement on AI Standards
  • NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency.

The bureau also developed the Roadmap for the AI Risk Management Framework (AI RMF 1.0) and an AI RMF Playbook as companions to the framework. NIST also created a GenAI Evaluation Program, including a GenAI Pilot Challenge designed to measure and understand how well systems discriminate between synthetic and human-generated content. In addition, it established the Trustworthy & Responsible AI Resource Center website and published industry perspectives on the NIST AI Risk Management Framework along with multiple AI RMF Crosswalks.


Bureau of Industry and Security: In January, BIS issued a Request for Comments on a proposed rule to protect U.S. cloud computing services from malicious cyber activities. The rule requires U.S. providers of Infrastructure as a Service (IaaS) products and their foreign resellers to verify their customers’ identities. It also requires those providers to report to the Secretary of Commerce when they know they will engage, or have engaged, in a transaction with a foreign person that could allow that person to train a large AI model with capabilities that could be used in malicious cyber-enabled activity. Comments on the notice were due April 29, 2024, but BIS had not released the results as of this article’s publication.

National Telecommunications and Information Administration: The NTIA issued a Federal Register Request for Comments (RFC) on an AI Accountability Policy in mid-April, including questions about AI governance methods for establishing accountability for AI system risks and harmful impacts. The bureau received more than 1,440 comments and published the AI Accountability Policy Report, which proposes a conceptual AI Accountability Chain focused on how documentation, disclosures, and access support independent evaluations that lead to consequences. The accountability model includes eight major policy recommendations, which are discussed in depth in the bureau’s report.


The NTIA also issued an RFC on the Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights. This addressed the potential risks, benefits, other implications, and appropriate policy and regulatory approaches to these models. The bureau received 332 written responses. Additionally, the bureau requested funding in FY 2025 to create an AI and Emerging Technologies Policy Lab (APL) to develop capabilities to meet emerging technology policy challenges.

U.S. Patent and Trademark Office: The USPTO released an RFC on the Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office, highlighting six key considerations:

  • Duty of Candor and Good Faith
  • Signature Requirement and Certifications
  • Confidentiality of Information
  • Foreign Filing Licenses and Export Regulations
  • USPTO Electronic Systems
  • Duties Owed to Clients.

The Director of the USPTO, Katherine Vidal, issued a subsequent memo addressing the Applicability of Existing Regulations as to Party and Practitioner Misconduct Related to the Use of Artificial Intelligence. The memo clarified existing rules and procedures and emphasized the importance of compliance with those rules when using AI-generated information.

The bureau also released the Inventorship Guidance for AI-Assisted Inventions, which explains that while AI-assisted inventions are not ineligible for patents, the inventorship analysis should focus on human contributions. The guidance provides procedures and specific examples for determining whether a natural person contributed significantly to the invention. Comments were due May 13, and the bureau had not published the results as of this article’s publication.

NIST, BIS, NTIA, and the USPTO publications related to Executive Order 14110 and previous AI-related publications are available on the individual bureau websites.

Contractor Implications

Federal contractors can expect evolving contract implications as the government continues to address AI utilization and the associated cybersecurity risk management and to define related Federal Acquisition Regulations. Additional regulations will be woven throughout the acquisition process, from market research to contract performance information. For additional information on the impact on federal contractors, see Deltek’s What the New AI Executive Order Means for Contractors, Cybersecurity Elements in the New Artificial Intelligence Executive Order, and Cloud Technology in the New Executive Order on Artificial Intelligence.

While this article focuses on the Department of Commerce’s contributions to federal AI standards, a summary of other federal agency activities through March 28 is available here.

This article has been updated to include the publication of the U.S. Artificial Intelligence Safety Institute Vision, Mission, and Strategic Goals.