
Advancing the SAFE Innovation in the AI Age Framework — AI: The Washington Report

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies (MLS). The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates understandably have exponentially increased the federal government’s interest in AI and its implications. In our weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities. Other Mintz and ML Strategies subject matter experts will continue to discuss and analyze other aspects of what could be characterized as the “AI Revolution.”

Today’s report focuses on Senator Chuck Schumer’s (D-NY) announcement of the SAFE Innovation in the AI Age legislative strategy (“SAFE Framework”). Our key takeaways are:

  1. Senator Schumer is committing to moving forward on federal AI legislation. The SAFE Framework, intended to be a bipartisan legislative effort to produce comprehensive legislation on AI, has two components. The first is a set of five principles intended to guide the development of legislation. These principles are security, accountability, protecting our foundations, explainability, and innovation.
  2. The second component of the SAFE Framework is a series of AI Insight Forums, or discussions with AI experts to be hosted by Congress beginning in September. Topics to be covered by these forums include innovation, IP, national security, and privacy.
  3. The announcement of the SAFE Framework comes amidst a flurry of AI-related legislative proposals in the US, and rapid progress towards the implementation of government-created AI frameworks in jurisdictions promoting AI innovations, such as the European Union and China.

Artificial Intelligence and Regulation Update — SAFE Innovation in the AI Age

At a June 21, 2023 forum hosted by the Center for Strategic and International Studies (“CSIS”), Senate Majority Leader Chuck Schumer (D-NY) announced SAFE Innovation in the AI Age (“SAFE Framework”), a legislative strategy to produce bipartisan, comprehensive legislation on AI. The SAFE Framework has two components: the first is a set of high-level principles meant to guide the development of AI legislation, and the second is a series of forums with AI experts on a diverse array of topics.

SAFE Framework Principles: Responsible AI Innovation

Senator Schumer claimed that the primary function of US AI legislation should be to encourage innovation. However, the absence of guidelines encouraging the safe development of AI will, in Schumer’s words, “slow AI’s development and prevent us from moving forward.” To address this tension, Senator Schumer’s SAFE Framework sets out five principles that comprehensive AI legislation should embody in order to encourage responsible AI innovation:

  1. Security: The federal government must take proactive measures to prevent hostile actors from misusing AI. Furthermore, legislation must address the prospect of widespread job loss due to AI development.
  2. Accountability: Businesses must be prohibited from deploying AI systems in a fraudulent, discriminatory, or harmful manner. Creators must be given their “due credit and compensation” when automated systems use their IP.
  3. Protecting our Foundations: AI development must be guided so as to ensure that autonomous systems do not undermine democratic values, especially trust in the electoral process. “We should develop the guardrails that align with democracy and encourage the nations of the world to use them,” asserted Schumer.
  4. Explainability: The federal government should encourage developers to make their AI systems transparent. The average user of an AI system should be able to understand how that system produces a given output. Schumer emphasized that explainability requirements must not force companies to reveal sensitive IP.
  5. Innovation: The US government should support US-led advancements in AI technologies aimed at unlocking AI’s extraordinary potential and advancing US leadership in AI.

The principles of the SAFE Framework echo several executive-branch AI R&D frameworks, including the Blueprint for an AI Bill of Rights and the National Artificial Intelligence Research and Development Strategic Plan. These documents, along with Schumer’s SAFE Framework, emphasize the need to encourage AI innovation while preventing the potential harms attendant to the technology’s development. However, in contrast to these documents, Schumer’s framework is explicitly aimed at the creation of bipartisan, comprehensive AI legislation within a matter of “months.”

SAFE Framework Implementation: AI Insight Forums

To create an omnibus legislative package on AI in a relatively short period of time, Schumer asserted that the “traditional approach of committee hearings” will not suffice — “a new approach is required.” Beginning in September 2023, a bipartisan group of lawmakers, including Senators Schumer, Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD), will convene a series of AI Insight Forums.

According to Schumer, through these forums, Congress will invite “top AI developers, executives, scientists, advocates, community leaders, workers, [and] national security experts” to discuss pressing issues in AI regulation. The aim of these forums will be to “forge consensus” on these salient matters so that legislators can “translate these ideas into legislative action.” Senator Schumer announced ten topics to be covered in these forums:

  1. Asking the right questions
  2. AI innovation
  3. Copyright and IP
  4. Use cases and risk management
  5. Workforce
  6. National security
  7. Guarding against doomsday scenarios
  8. AI’s role in our social world
  9. Transparency, explainability, and alignment
  10. Privacy and liability

While Schumer’s SAFE Framework grants a significant role to the AI Insight Forums in developing comprehensive AI legislation, the Senator was careful to stress that these forums would contribute to, rather than replace, the work done by relevant Congressional committees. “Our committees must continue to be the key drivers of Congress’s AI policy response…but hearings won’t be enough. We need an all-of-the-above approach because that’s what AI’s complexity and speed demands.”

As a precursor to the upcoming AI Insight Forums, Senators Schumer, Heinrich, Young, and Rounds had already scheduled a series of senator-only briefings on AI. The first briefing, convened on June 13, concerned the current state of AI technology. The second will cover trends in AI development and strategies to maintain American leadership in AI R&D. The final briefing will address how US national security agencies are utilizing AI and offer an assessment of “our adversaries’ AI capabilities.”

In a statement released on June 14, Schumer hailed the first AI briefing as a “huge success.” Characterizing the mood of the assembled senators following the briefing as “a mix of urgency and humility,” Schumer warned that “Congress has only a limited amount of time to stay proactive on Artificial Intelligence ….” The announcement of the SAFE Framework appears to be Senator Schumer’s attempt to act on this urgency, focusing it into a robust bipartisan legislative framework.

The SAFE Framework in an International Context

Senator Schumer’s announcement of the SAFE Framework comes amidst a flurry of legislative proposals on AI both at home and abroad. At the time of writing, five bills concerning the regulation of AI have been formally introduced in Congress in June 2023 alone. Despite this activity, jurisdictions promoting AI innovations, such as China and the European Union, have made greater progress towards implementing comprehensive AI legislation.

On April 11, 2023, the Cyberspace Administration of China (“CAC”), the People’s Republic of China’s (“PRC”) national internet regulator, released a draft regulation — Management of Generative Artificial Intelligence Services (“CAC Draft Regulation”) — for public comment. The CAC Draft Regulation would broadly apply to the “research, development, and use of products with generative AI functions, and…the provision of services to the public” within the PRC.[1]

Similarly, rapid progress has been made in the European Union (“EU”). On June 14, 2023, the day after Schumer’s first AI briefing, the European Parliament approved draft rules for its Artificial Intelligence Act, a proposed comprehensive regulatory framework on AI. The AI Act adopts a “risk-based approach,” differentially regulating uses of AI based on their “risk to the health and safety or fundamental rights of natural persons.” Practices banned under this rubric include “AI-based social scoring for general purposes done by public authorities.” The bill still awaits a number of procedural steps before it becomes law and may not come into force until 2025.

At the time of writing, it is too soon to ascertain how an American omnibus AI bill guided by the SAFE Framework would differ from the EU’s AI Act or the CAC Draft Regulation. When asked by Gregory Allen of CSIS how he perceived AI regulation coming out of other jurisdictions, Schumer replied that no currently existing AI legislation has “captured the imagination of the world…most of them were [drafted] quite quick[ly].” In contrast to the PRC’s CAC, Schumer asserted that Congress will adopt a more consultative approach. Schumer hopes that through the SAFE Framework, Congress can produce a piece of AI legislation that will set the standard for much of the world. “We believe that if [American comprehensive AI legislation] is good enough, the rest of the world will follow,” said Schumer.

Conclusion

Senator Schumer encouraged his audience to look towards the “future.” However, an episode from the recent past gives some pause about the viability of the SAFE Framework. Some commentators have noted strong parallels between the current focus on AI in Congress and the sustained push to implement comprehensive data privacy legislation in the late 2010s. At that time, concern regarding data privacy violations led to a panoply of regulatory proposals, but ultimately no framework garnered the bipartisan support needed to be signed into law.

Perhaps with this recent example in mind, Schumer repeatedly emphasized the need for the Senate to transcend partisan differences and come to a consensus on the issue of AI regulation. “AI is one issue that must lie outside the typical partisan fights of Congress. The changes AI will bring will not discriminate between left, or right, or center.” Whether the increasingly rapid pace of AI development, the warnings of experts, and the sustained effort of the SAFE Framework will be able to overcome partisan divides and produce effective AI legislation remains to be seen.

We will continue to monitor, analyze, and issue reports on these developments.

 

Endnotes

[1] A full translation of the draft legislation can be accessed here. A detailed analysis of the draft legislation can be accessed here.

 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington, DC office.