
The FTC Is Lurking — AI: The Washington Report

Welcome to this week’s issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies. The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates understandably have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of that Washington-focused set of potential legislative, executive, and regulatory activities. Other Mintz and ML Strategies subject matter experts will continue to discuss and analyze other aspects of what could be characterized as the “AI Revolution.”

We have previously reported on Congressional activity and various proposals for a process leading to regulatory legislation. The prospects and timeline for action on these proposals remain murky. So today, we look at the Federal Trade Commission (“FTC”). Even without new, explicit Congressional authorization, the FTC, as one of the primary competition and consumer protection regulators in the United States, is staking its claim to regulate AI. In today’s report, we detail key documents, settlements, and appointments that provide insight into the current FTC’s thinking on AI. Our key takeaways are:

  1. An April 2023 joint statement from leadership in key consumer protection and law enforcement agencies, including the FTC, signals these agencies’ resolve to extend their enforcement authority into the domain of artificial intelligence.
  2. “Algorithmic disgorgement,” or the enforced deletion of algorithms created using certain classes of data, is an enforcement paradigm that the FTC has repeatedly utilized since 2019 and that one senior FTC official considers a “significant part” of the Commission’s approach to AI regulation.
  3. Public guidance provided by the FTC demonstrates the current Commission’s resolve to utilize its statutory authority to regulate “unfair” and “deceptive” practices by entities deploying AI systems.

FTC and AI: Actions and Statements During the Biden Administration

The Joint Statement on Bias in Automated Systems

In April 2023, key officials from the FTC, Consumer Financial Protection Bureau, Department of Justice (“DOJ”), and Equal Employment Opportunity Commission issued a non-enforceable “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems.” As leaders of agencies tasked to “protect civil rights, fair competition, consumer protection, and equal opportunity,” the authors of the statement assert that their existing “legal authorities apply to the use of automated systems.”

It is important to note that this statement encompasses not just AI, but all “automated systems,” or “software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” Given the potential for automated systems to “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” the agency heads assert their agencies’ “responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws.”

Though the statement explicitly notes that it “does not create any new rights or obligations and it is not enforceable,” its discussion of the instances in which automated systems may contribute to unlawful discrimination provides insight into how these agencies may seek to extend their enforcement actions into the domain of AI. The statement outlines three potential sources of discrimination arising from the use of automated systems:

  1. Data and Datasets: Automated systems trained on “unrepresentative or imbalanced datasets” may produce discriminatory outcomes (illustrated in the sketch after this list).
  2. Model Opacity and Access: If an algorithm’s structure is not transparent to developers and to those subject to its outcomes, it may be difficult to ascertain whether the algorithm is fair.
  3. Design and Use: Developers may design an algorithm with an incomplete understanding of the context in which their tool will be used, leading to potentially harmful results.
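
To make the first of these concerns concrete: before deployment, a team can check both how groups are represented in the training data and how favorable outcomes are distributed across them. The sketch below is a minimal, hypothetical illustration of such an audit; the record layout, the `group` and `outcome` field names, and the 80% (“four-fifths”) review threshold are our assumptions for illustration, not requirements drawn from the joint statement.

```python
from collections import Counter, defaultdict

# Hypothetical audit: check whether a training dataset is balanced across
# demographic groups and whether favorable outcomes are distributed evenly.
# Field names and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not requirements stated by the agencies.

def audit(records, group_key="group", outcome_key="outcome", threshold=0.8):
    totals = Counter(r[group_key] for r in records)
    favorable = defaultdict(int)
    for r in records:
        if r[outcome_key]:  # True = favorable outcome (e.g., an approval)
            favorable[r[group_key]] += 1

    n = len(records)
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())

    for g in sorted(totals):
        share = totals[g] / n          # representation in the dataset
        ratio = rates[g] / best if best else 1.0
        flag = "REVIEW" if ratio < threshold else "ok"
        print(f"{g}: {share:.0%} of data, favorable rate {rates[g]:.0%} "
              f"({ratio:.0%} of best group) -> {flag}")

# Toy example: group B's favorable rate is half of group A's, so it is flagged.
audit([
    {"group": "A", "outcome": True},  {"group": "A", "outcome": True},
    {"group": "A", "outcome": False}, {"group": "B", "outcome": True},
    {"group": "B", "outcome": False}, {"group": "B", "outcome": False},
])
```

A flagged disparity is not itself proof of unlawful discrimination; it is simply the kind of signal that would prompt the closer review the agencies describe.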

Algorithmic Disgorgement: A New FTC Enforcement Paradigm for the Age of AI?

Entities that produce proprietary algorithms through user data collection should be aware of the FTC’s deployment of a novel remedy called “algorithmic disgorgement.” As defined in a paper written by FTC Commissioner Rebecca Kelly Slaughter, the premise behind algorithmic disgorgement is that “when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it.” In other words, if the FTC determines that an entity has illegally collected user data, the Commission may include as a remedy in a settlement the deletion of not just the data itself, but all algorithms trained on that data. “The authority to seek this type of remedy,” argues Commissioner Slaughter, “comes from the Commission’s power to order relief reasonably tailored to the violation of the law.”
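
For companies weighing their exposure to this remedy, the practical engineering question is provenance: which models were trained, in whole or in part, on a given dataset? The sketch below is a minimal, hypothetical illustration of a lineage registry that could answer a “delete everything derived from dataset X” request; the class, method names, and identifiers are all our assumptions, not anything prescribed by the FTC.

```python
# Hypothetical lineage registry: records which models were derived from which
# datasets so that a disgorgement-style obligation ("delete the data and any
# model trained on it, in whole or in part") can actually be carried out.
# All identifiers below are illustrative assumptions.

class LineageRegistry:
    def __init__(self):
        self._parents = {}  # model_id -> set of source ids (datasets or models)

    def register(self, model_id, sources):
        """Record that model_id was trained on the given datasets/models."""
        self._parents[model_id] = set(sources)

    def affected_work_product(self, dataset_id):
        """Return every model derived, directly or transitively, from dataset_id."""
        affected, frontier = set(), {dataset_id}
        while frontier:
            frontier = {
                m for m, srcs in self._parents.items()
                if srcs & frontier and m not in affected
            }
            affected |= frontier
        return affected

reg = LineageRegistry()
reg.register("face_model_v1", {"user_photos"})
reg.register("face_model_v2", {"face_model_v1", "licensed_photos"})
reg.register("spam_model", {"email_corpus"})

# Everything that would have to be deleted if "user_photos" were found to
# have been collected unlawfully:
print(reg.affected_work_product("user_photos"))
# -> {'face_model_v1', 'face_model_v2'}
```

Note that the downstream "face_model_v2" is swept in even though it also used lawfully obtained data, mirroring the “in whole or in part” language of the Commission’s orders.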

The FTC first applied this remedy in its 2019 settlement with Cambridge Analytica. In this settlement, the Commission ordered the firm to delete “any algorithms or equations, that originated, in whole or in part, from” the data that allegedly had been illegally collected from Facebook users.[1] During the Biden administration, this remedy has been applied in two further settlements and one proposed order in a pending matter.[2] Through these settlements, the concept of algorithmic disgorgement has gained a more definite form, increasing the likelihood that it will be deployed in the future.

FTC Settlement with Everalbum Inc.

In January 2021, the FTC announced that it had settled with Everalbum Inc., the developer of the photo storage app “Ever.” The FTC alleged that the firm “deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts,” a misrepresentation in violation of the FTC Act. As part of the settlement, finalized in May 2021, the FTC compelled Everalbum Inc. to delete all “Affected Work Product,” or “any models or algorithms developed in whole or in part using Biometric Information [that Everalbum Inc.] collected from Users of the ‘Ever’ mobile application.”

Though the principle of algorithmic disgorgement had already been applied in the 2019 Cambridge Analytica settlement, the 2021 Everalbum settlement introduced the term “Affected Work Product” to designate the algorithms and models to be deleted. The Commission would again utilize the “Affected Work Product” concept in its 2022 settlement with WW International Inc. and Kurbo Inc.

FTC Settlement with WW International Inc. and Kurbo Inc.

In February 2022, the DOJ filed a complaint against WW International Inc. and its subsidiary Kurbo Inc. on behalf of the FTC. The complaint alleged that the defendants had committed multiple violations of the FTC Act and the Children’s Online Privacy Protection Act (“COPPA”) through “Kurbo by WW,” a weight loss app for children. COPPA mandates that operators of online services retain personal information collected from children “for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.” In their complaint, the DOJ and FTC alleged that the defendants had violated this and other COPPA requirements.

Mirroring the language of the Everalbum Inc. settlement, the FTC and DOJ required WW International Inc. and Kurbo Inc. to delete any “Affected Work Product,” or “any models or algorithms developed in whole or in part using Personal Information Collected from Children through the Kurbo Program.”

FTC Proposed Order with Edmodo LLC

In May 2023, the FTC obtained an order against education technology company Edmodo LLC for alleged violations of COPPA. These alleged violations include the collection of students’ personal information without parental authorization. The Commission’s proposed stipulated order includes a requirement to “delete or destroy any Affected Work Product,” including “any models or algorithms developed in whole or in part using Personal Information Collected from Children through the Edmodo Platform without Verifiable Parental Consent or School Authorization.”

As automated systems increasingly power the global economy and privacy concerns surrounding the data collected to train AI models continue to mount, it is conceivable that the FTC will continue to include algorithmic disgorgement orders in settlements. Key FTC officials have indicated as much in recent statements. In July 2023, the acting associate director of the FTC’s Division of Privacy and Identity Protection called algorithmic disgorgement a “significant part” of the Commission’s AI regulation strategy.

FTC Business Guidance on AI

Since the beginning of the Biden administration, the FTC has released a series of business guidance blog posts on AI.

The FTC’s Claim to Regulate AI

In April 2021, Elisa Jillson of the FTC Division of Privacy and Identity Protection released a blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI.” In this blog post, Jillson argues that there are at least three laws that grant the FTC jurisdiction over AI-related issues:

  1. Section 5 of the FTC Act: Selling or using a racially biased algorithm may constitute a violation of this act’s prohibition on unfair or deceptive practices.
  2. Fair Credit Reporting Act: Deploying an algorithm that denies individuals employment, housing, credit, insurance, or other benefits on a discriminatory basis may constitute a violation of this act.
  3. Equal Credit Opportunity Act: Using a biased algorithm that results in credit discrimination on the basis of the act’s protected categories may constitute a violation of this act.

To avoid falling under FTC scrutiny, Jillson encourages companies deploying AI systems not to train algorithms on biased datasets, to audit their algorithms periodically for discriminatory outcomes, and to employ “transparency frameworks and independent standards.”

A Warning Regarding False Claims on AI

In February 2023, Michael Atleson of the FTC Division of Advertising Practices released a blog post entitled “Keep your AI claims in check.” Warning marketers of AI products that “for FTC enforcement purposes… false or unsubstantiated claims about a product’s efficacy are our bread and butter,” Atleson urges advertisers “not to overpromise what your algorithm or AI-based tool can deliver.” Specifically, Atleson cautions advertisers not to exaggerate the features of an AI product, baselessly assert that an AI product performs a function better than a non-AI product, claim that a product uses AI when it in fact does not, or bring an AI product to market without knowing the product’s “reasonably foreseeable risks and impact.”

Using AI for Deceptive Purposes

In March 2023, Atleson released a blog post entitled, “Chatbots, deepfakes, and voice clones: AI deception for sale.” This article discusses the use of AI tools to disseminate intentionally misleading or fraudulent content. As argued by Atleson, the “FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose.” To avoid being subject to FTC scrutiny on this point, Atleson offers designers of AI systems four guidelines:

  1. Prior to releasing an AI tool, consider the “reasonably foreseeable” harmful and fraudulent use cases to which the tool could be applied.
  2. Take “reasonable measures to prevent consumer injury” prior to releasing an AI tool to market.
  3. Do not over-rely on post-release detection tools.
  4. Do not use an AI tool to mislead consumers.

Using AI for Unfair Purposes

In May 2023, Atleson published another blog post entitled “The Luring Test: AI and the engineering of consumer trust.” In this post, Atleson distinguishes cases of AI-related deception, such as “exaggerated and unsubstantiated claims for AI products,” from unfair AI practices. As Atleson puts it, a practice is unfair “if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.”

With regard to potentially unfair AI practices, a “key FTC concern” is companies using generative AI tools “in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment.” Atleson argues that such practices could constitute a violation of the FTC Act, regardless of whether the classes of people impacted are protected by anti-discrimination laws.

Another potentially unfair AI practice pointed out by Atleson is the placement of advertisements within generative AI systems. As Atleson puts it, “It should always be clear that an ad is an ad… any generative AI output should distinguish clearly between what is organic and what is paid.”

Competition Concerns Surrounding Generative AI

The FTC has also produced articles indicating the Commission’s perspective on competition issues raised by automated systems. Late last month, staff in the FTC’s Bureau of Competition and Office of Technology released a blog post entitled “Generative AI Raises Competition Concerns.” This joint blog post argues that if large incumbents gain “control over one or more of the key building blocks that generative AI relies on,” that control “could affect competition in generative AI markets.” The key building blocks identified by the authors of the blog post include:

  1. Data: Since companies rely on substantial amounts of data to train and refine AI models, large incumbents with accumulated reserves of proprietary data may come to possess a competitive advantage over newcomers to the generative AI market. As such, “companies’ control over data may…create barriers to entry or expansion that prevent fair competition from fully flourishing.”
  2. Talent: Given the complexity of AI systems, a highly trained workforce is needed to maintain and improve generative AI models. “Since requisite engineering talent is scarce, powerful companies may be incentivized to lock in workers and thereby stifle competition from actual or would-be rivals.” To prevent this outcome, the authors of the blog post call for “talented individuals with innovative ideas [to] be permitted to move freely,” unhindered by restraints such as non-compete agreements.
  3. Computational Resources: The development of generative AI models also requires significant computational resources, either in the form of specialized chips or cloud computing services. This requirement contributes to a “high cost of entry” to the generative AI market, potentially stifling competition.

By controlling one or more of these key inputs, argue the authors, market incumbents can “use unfair methods of competition to entrench their current power or use that power to gain control over a new generative AI market.” Given this possibility, the authors assert that the “Bureau of Competition, working closely with the Office of Technology, will use our full range of tools to identify and address unfair methods of competition.”

We expect the FTC’s AI activity to continue and only increase. We will continue to monitor, analyze, and issue reports on these developments.

 

Endnotes

[1] Specifically through the Facebook application “GSRApp.”
[2] The Trump administration’s FTC announced that Everalbum Inc. settled with the Commission on January 11, 2021. The Biden administration’s FTC finalized the settlement on May 7, 2021.

 


Authors

Bruce D. Sokler

Member / Co-chair, Antitrust Practice

Bruce D. Sokler is a Mintz antitrust attorney. His antitrust experience includes litigation, class actions, government merger reviews and investigations, and cartel-related issues. Bruce focuses on the health care, communications, and retail industries, from start-ups to Fortune 100 companies.

Alexander Hecht

ML Strategies - Executive Vice President & Director of Operations

Alexander Hecht is Executive Vice President & Director of Operations of ML Strategies, Washington, DC. He's an attorney with over a decade of senior-level experience in Congress and trade associations. Alex helps clients with regulatory and legislative issues, including health care and technology.

Christian Tamotsu Fjeld

Senior Vice President

Christian Tamotsu Fjeld is a Senior Vice President of ML Strategies in the firm’s Washington, DC office. He assists a variety of clients in their interactions with the federal government.

Raj Gambhir

Raj Gambhir is a Project Analyst in the firm’s Washington, DC office.