
Three Trends in AI Regulation in 2023 — AI: The Washington Report

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The year 2023 has been a significant one for artificial intelligence and AI policy. With the emergence of powerful and easily accessible generative AI tools in late 2022, regulators in the United States and beyond have spent this year attempting to simultaneously familiarize themselves with the ins and outs of complex autonomous systems and ascertain how best to regulate these technologies. Congress has held numerous hearings on AI, many of which we have covered. Several agencies, notably the Federal Trade Commission, have taken steps to use existing powers to address AI issues. President Biden issued an AI Executive Order, which is now in the process of being implemented.

The sheer pace of change renders a panoptic overview of this year's developments in AI regulation unhelpful, if not infeasible. Instead, we highlight three broad trends in AI regulation for 2023.

Trend #1: Enforcement agencies have been scrambling to assert AI enforcement authority, with the FTC leading the way.

“Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices,” asserts an April 2023 joint statement from the heads of key consumer protection agencies. This assertion neatly encapsulates a range of actions on AI taken by various federal enforcement agencies in 2023.

As experts have noted, the development of generative AI has raised novel social, political, and economic difficulties, ranging from the prospect of massive shifts in the labor force to the rapid dissemination of tailored misinformation. Lawmakers have expressed the urgency of responding to these threats, but AI regulation from Congress has so far not been forthcoming. To respond to AI harms, therefore, enforcement agencies have had to assert that their existing authorities apply to novel AI uses.

The agency that has dived in with the most vigor is undoubtedly the Federal Trade Commission (“FTC” or “Commission”). As we have covered this year, the FTC has, through business guidance posts, op-eds in popular publications, speeches, and panels, asserted that it “is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition,” as put by Chair Lina Khan.

Words turned to action when, in August 2023, the FTC brought a first-of-its-kind enforcement action involving AI. We expect this first tentative step toward AI enforcement by the Commission to be followed by more actions in the new year.

While the FTC has been out front, other agencies, including the Consumer Financial Protection Bureau and the Securities and Exchange Commission, are beginning to conduct informational inquiries and engage in rulemaking to establish themselves as AI regulators within their respective fields.

In 2024, we anticipate that enforcement agencies will continue to define and assert their AI enforcement authority.

Trend #2: Lawmakers in the United States have been making slow but appreciable progress towards formulating legislation that addresses AI.

In July 2023, Senate Majority Leader Chuck Schumer (D-NY) announced his intention to secure comprehensive AI legislation within “months.” While this time frame will likely pass without any legislation, Congress has been very active on the subject of AI legislation since the beginning of the year.

As shown by our Mintz AI Legislation tracker, in May 2023, lawmakers began to introduce AI legislation at a rapid pace, a tempo that has not slowed in the more than seven months since. Many of these bills are piecemeal in nature, seeking to address AI-related harms in discrete domains such as copyright, national defense, or election security. However, as the year has progressed, there have been a few developments signaling progress towards the passage of comprehensive AI regulation in the United States.

The first has been Leader Schumer’s AI Insight Forums. Announced in June 2023 and led by a bipartisan steering group of top senators, these AI Insight Forums are intended to educate members of Congress on key AI issues to the end of facilitating the development of comprehensive AI legislation. The final of the nine-forum series was held on December 6, 2023. The forums have covered topics ranging from “Copyright and IP” and “AI Innovation” to “AI’s role in our social world” and “Guarding against doomsday scenarios.”

A second, parallel effort has been spearheaded by Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the Chair and Ranking Member, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Blumenthal and Hawley released a bipartisan framework on AI legislation in September 2023. This proposal, which would establish a licensing regime for AI models “used in high-risk situations,” represents the first major attempt by federal lawmakers to enact a concrete regulatory framework on AI.

In the coming year, we anticipate that members of Congress will continue to work through their committees to develop comprehensive AI legislation. While it is unclear whether Congress will ultimately succeed in passing such a measure, it is clear that lawmakers on both sides of the aisle will continue to make AI regulation a top legislative priority in 2024.

Trend #3: International developments have been impacting the development of AI regulation in the United States.

The pace with which lawmakers in the United States are pursuing comprehensive AI legislation stems not just from the potential and actual harms caused by rapid AI development but also from AI-related regulatory developments taking place in other jurisdictions.

The most significant development in global AI regulation in 2023 has been the progress of the European Union’s AI Act. On December 8, 2023, negotiators from the European Parliament and the Council reached a provisional agreement on the AI Act, a piece of AI legislation first proposed in April 2021. As discussed in previous newsletters, the AI Act utilizes a “risk-based approach,” regulating AI uses on the basis of their “risk to the health and safety or fundamental rights of natural persons.” Certain uses, such as AI-based social scoring conducted by public authorities, are banned outright, while “high-risk AI systems” would be subject to “specific restrictions and safeguards.” As a European Council (“EC”) press release puts it, “the higher the risk, the stricter the rules.”

Due to the size of the European market, the AI Act will influence global AI standards in a manner determined by the European Union. This influence is not an incidental feature of the regulation. The EC press release asserts that once implemented, the AI Act will “set a global standard for AI regulation in other jurisdictions…thus promoting the European approach to tech regulation in the world stage.”

Given the stated intent of US lawmakers and regulators to have America lead the way in AI regulation, the progress of the EU’s AI Act has contributed to the sense of urgency in Washington surrounding the need to pass comprehensive AI legislation. “The E.U. agreement shows,” asserted Leader Schumer in a December 8, 2023 statement, “that the U.S. cannot sit on the sidelines in the race for A.I.”

What will happen in 2024?

We are skeptical of anyone confidently predicting what 2024 will bring; the pace and range of regulatory responses to AI over the past year make such prognostication difficult. That said, if current trends continue, we can reasonably expect to see more aggressive enforcement activity from regulatory agencies, further progress towards comprehensive AI legislation in Congress, activity across executive branch agencies responding to deadlines embedded in the Biden AI Executive Order, and continuing interplay between developments in AI regulation in the United States and those occurring in other jurisdictions.

Through every twist and turn, we will continue to provide weekly commentary on the latest in AI regulation. We hope that you have a happy holiday season and New Year.



Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington, DC office.