Senate Leaders Introduce AI Regulation Framework

Published: September 14, 2023

Federal Market Analysis | Artificial Intelligence/Machine Learning | Information Technology | Policy and Legislation

Among the elements of the framework is a requirement that AI models be licensed through an independent oversight body.

Policymakers are eager to formulate regulations around artificial intelligence (AI). Recognizing the need to foster AI innovation while managing the associated privacy and security risks, Congress is working to understand how AI operates before moving forward with governing the technology.

Members of industry and academia are also eager to see lawmakers put in place policies to regulate AI systems. “To bring AI within the rule of law, lawmakers must go beyond half measures to ensure that AI systems and the actors that deploy them are worthy of our trust…The AI landscape is at a crossroads and now is the time to act. The harms of AI are real, significant, and becoming both entrenched and normalized by the day. If we do not impose rules to limit abuses of power, we risk eroding our civil liberties, our civil rights, and our democracy itself,” according to testimony by Woodrow Hartzog of the Boston University School of Law.

On Tuesday, the U.S. Senate Subcommittee on Privacy, Technology and the Law held a hearing to discuss the influences and risks of AI and effective steps to regulate it, as well as to introduce a new AI regulation framework. Introduced by Senators Richard Blumenthal and Josh Hawley, the Bipartisan Framework for U.S. AI Act provides a first glimpse into an overarching AI oversight structure, including the requirement that AI models be licensed through an independent oversight body.

At the hearing, Brad Smith, Vice Chair and President of Microsoft, praised the framework, calling it the “safety brakes” for AI models. “At their core, these laws should require AI systems to remain subject to human control at all times, and ensure that those who develop and deploy them are subject to the rule of law,” added Smith.

Also this week, Senators Amy Klobuchar, Susan Collins, Chris Coons and Josh Hawley introduced the Protect Elections from Deceptive AI Act, which would prohibit the use of AI to generate deceptive content falsely depicting candidates in political ads that influence federal elections.

Meanwhile, Senator Chuck Schumer has begun to host AI Insight Forums with industry and stakeholders to understand how AI operates and how best to regulate it.

At the White House, the Biden Administration continues to secure voluntary commitments from AI companies to mitigate AI risk. Eight additional companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI) have pledged to ensure their products are safe and securely built and to establish user trust.

Still to come is policy guidance from the Office of Management and Budget (OMB) to federal agencies regulating the development, procurement, and use of AI systems. According to an article by Federal News Network, the guidance is expected to contain ten requirements, including the designation of a chief AI officer and the creation of an AI governance board, to provide direction on use of the technology.