
DOJ Deputy Chief Announces “Stiffer Sentences” for AI-Related White-Collar Crime — AI: The Washington Report

  1. Department of Justice (DOJ) Deputy Attorney General Lisa Monaco’s March 7, 2024, speech addressed the agency’s current enforcement stance towards AI.
  2. In the speech, Monaco reiterated that there are no AI exemptions to the laws on the books, announced that the DOJ will be seeking “stiffer sentences” for AI-augmented crimes, and stated that the agency will consider firms’ ability to manage AI risk in its Evaluation of Corporate Compliance Programs.
  3. Monaco’s speech is part of a broader process whereby enforcement agencies have applied existing regulatory authority to novel uses of AI. Given these developments, firms utilizing AI should take special care to ensure that they are in proper compliance with the law.  
     

  
On March 7, 2024, Deputy Attorney General of the Department of Justice (DOJ) Lisa Monaco delivered a keynote address at the American Bar Association’s National Institute on White Collar Crime. While the speech was wide-ranging, including commentary on enhanced incentives for whistleblowers, it featured a substantial discussion of the DOJ’s evolving enforcement approach to AI. Monaco’s address reinforces the impression left by recent developments: enforcement agencies are paying particularly close attention to potential violations of the law abetted by AI, and firms should take special care to ensure regulatory compliance when integrating AI systems.

Monaco Reiterates That There Are No AI Exemptions

Monaco began her remarks on AI with an oft-stated viewpoint: that the technology will be used for both good and ill ends. “All new technologies are a double-edged sword — but AI may be the sharpest blade yet,” asserted Monaco. “It holds great promise to improve our lives — but great peril when criminals use it to supercharge their illegal activities, including corporate crime.” Given the risks posed by the development of AI, Monaco asserted that the DOJ will be “using our tools in new ways to address them.”

Part of the DOJ’s AI enforcement strategy will be using existing tools in new ways because, as Federal Trade Commission (FTC) Chair Lina Khan has repeatedly asserted, “there’s no AI exemption to the laws on the books.” In other words, the use of AI does not shield otherwise illegal conduct from prosecution. To illustrate the point, Monaco asserted that fraud “using AI is still fraud. Price fixing using AI is still price fixing. And manipulating markets using AI is still market manipulation.”

Each of the three types of illegal conduct mentioned by Monaco has been of particular interest to regulators concerned with AI.

  1. Fraud: AI-augmented fraud is a topic that has particularly interested the FTC. Since early 2023, the FTC has regularly posted business guidance warning firms that they should neither make false claims about their AI products and services nor commit AI-augmented fraud. To combat the latter category of conduct, the FTC proposed a rule in mid-February 2024 that would hold AI companies liable for “deepfake” impersonation scams[1] conducted using their platforms.
  2. Price fixing: The emergence of AI pricing algorithm services has allowed sellers to access suggested prices for their assets on the basis of relevant variables. One domain in which the use of pricing algorithms has made significant inroads is the property rental market. The FTC and DOJ have expressed strong opposition to this development on the basis that competing landlords’ use of a pricing algorithm constitutes a violation of the Sherman Act’s prohibition on price fixing. The agencies’ opposition to the use of pricing algorithms in the property rental market in part stems from the potential for this practice to raise rental prices for low-income consumers.
  3. Market Manipulation: At the time of writing, no case has been brought against an individual or business entity for engaging in AI-assisted market manipulation. However, the potentially destructive consequences of this conduct have worried lawmakers and regulators. In May 2023, an AI-generated image purporting to show a terrorist attack briefly sent markets tumbling. To address the threat of AI-driven market manipulation, in December 2023, a bipartisan group of senators introduced a bill that would allow the Securities and Exchange Commission to seek treble penalties for violations “involving the use of machine-manipulated media…”

Monaco Announces “Stiffer Sentences” for Certain AI-Augmented Crimes

In her address, Monaco announced that the DOJ would seek higher penalties for AI-related white-collar crimes committed by individuals and corporations. Monaco argued that because AI poses great risks to the public, and because the DOJ has “long used sentencing enhancements to seek increased penalties for criminals whose conduct presents especially serious risks to their victims and to the public at large,” it follows that “[w]here AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike.”[2]

It has long been clear that federal enforcement agencies are paying closer attention to AI-related misconduct. However, Monaco’s announcement that the DOJ would be seeking higher penalties for certain AI-augmented crimes is a significant development. It is important to note that the courts have not yet endorsed this enforcement doctrine, and the extent of these “stiffer sentences” has yet to be seen. Regardless, in light of this announcement, corporate officers should take particular care to ensure that their firms are utilizing AI in a manner that complies with applicable regulations.

DOJ Will Consider Ability to Manage AI Risk in Overall Compliance Review

Monaco concluded her remarks on AI by announcing that DOJ prosecutors will now consider a firm’s ability “to manage AI-related risks as part of its overall compliance efforts.” She justified this development by noting that DOJ reviews of “a company’s compliance program” focus on “how well the program mitigates the company’s most significant risks.” Because many firms’ most significant risks now involve AI, she reasoned, DOJ reviews of corporate compliance programs must necessarily take AI into consideration.

“I have directed the Criminal Division to incorporate assessment of disruptive technology risks,” explained Monaco, “— including risks associated with AI — into its guidance on Evaluation of Corporate Compliance Programs.” In the wake of this announcement, we should expect to see an update of the DOJ Criminal Division’s guidance on the Evaluation of Corporate Compliance Programs in the coming weeks or months.

Conclusion: The DOJ and FTC as “Sheriffs” of the “AI Wild West”?

The rapid popularization of powerful commercial generative AI tools in late 2022 produced a regulatory and legislative chasm. Suddenly, lawmakers, regulators, and judges were faced with novel AI uses, some of which posed a threat to consumer safety, financial stability, and even the democratic process. Over the past year-and-a-half, federal lawmakers have not had success in formulating comprehensive AI regulation, and it now appears unlikely that such regulation will be forthcoming in this Congress. Despite this, the use (and abuse) of AI tools will only accelerate.

Into this regulatory chasm have stepped the executive branch and its associated enforcement agencies. President Biden issued an executive order on AI that directed sweeping changes across the federal bureaucracy. And, as we have discussed at length in this newsletter series, enforcement agencies have sought to utilize their existing authority to rein in novel AI abuses.

Chief among the agencies seeking to become the “sheriffs” of the “AI Wild West” are the FTC and DOJ. Both agencies have attempted to establish themselves as premier AI regulators and have gone to great lengths to apply existing regulatory authority to the domain of AI. Monaco’s March 2024 speech marks another step in the process by which these two agencies have attempted to legitimize themselves as leading AI regulators in the United States.

There are those who have spoken out against the steps taken by the FTC and DOJ to assert their authority over AI, worrying that these enforcement initiatives may be misguided or might stifle innovation. It is also important to note that several of the positions put forth by Monaco in her March 2024 speech, such as the DOJ’s stance on algorithmic price fixing, have not yet been endorsed by the courts.

Regardless, firms should pay careful attention to the strict enforcement stance taken by the DOJ regarding AI and ensure that they remain in compliance with the law. We will continue to monitor, analyze, and issue reports on the pronouncements and initiatives of the DOJ and FTC. Please feel free to contact us if you have questions as to current practices or how to proceed.

 

Endnotes

[1] Deepfakes are doctored images, videos, or recordings that make it appear as though an individual is saying or doing something that they did not actually say or do.
[2] Emphasis added.

 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington, DC office.