FTC Complaint Against Rite Aid Signals New Paradigm for Evaluating AI Technology Use

*First in a series

Late last month, the Federal Trade Commission (FTC) filed a complaint for permanent injunctive relief and sanctions against Rite Aid, alleging that the pharmacy chain’s widespread use of facial recognition technology, deployed to reduce the risk of shoplifting and other criminal activity in its stores, violated Section 5 of the FTC Act and reflected an effectively wholesale failure to prevent reasonably foreseeable injury to consumers. The complaint is a fast follow to the FTC’s November 21, 2023 adoption of a resolution streamlining FTC staff’s ability to issue civil investigative demands (CIDs) by preapproving the use of compulsory process in investigations relating to AI, and it is evidence of the FTC making good on its vow to use its authority to protect individuals’ rights when violations occur through “advanced technologies.” It also affirms that the FTC is continuing to plant an early and firm stake as a leading voice in the evolving AI technology legal landscape. The Rite Aid complaint makes clear that the FTC views its authority to penalize businesses’ alleged misuse or improper utilization of AI as broad and far-reaching.

Rite Aid’s Use of Facial Recognition Technology

According to the complaint, Rite Aid’s use of facial recognition technology in its stores spanned a period of nearly eight years, beginning in October 2012 and continuing until July 2020. The technology relied on an “enrollment database” populated with images of individuals from law enforcement or media reports, and from individuals whom Rite Aid identified as having previously engaged in actual or attempted criminal activity in its stores. Cameras installed in Rite Aid’s stores captured live images of customers as they moved through the stores, and the facial recognition technology would compare the live images of customers to the enrollment images in the database to identify potential matches to enrolled individuals. If the technology detected a match, an alert was generated and sent to store employees with instructions ranging from observing and monitoring the individual while in the store to calling the police to have the individual removed from the premises. 

The FTC alleged that throughout the duration of its use by Rite Aid, the facial recognition technology generated thousands of false-positive matches and resulted in numerous complaints by customers that they were followed around stores, confronted and harassed by employees, wrongfully denied service, and even wrongfully arrested. The FTC specifically alleged that the harms caused by the largely inaccurate matching system had a disproportionate impact on racial minorities and women. The FTC concluded that Rite Aid’s allegedly lackluster administration of the facial recognition program was so egregious as to constitute unfair business practices in violation of the Federal Trade Commission Act (FTCA).

Key Failures by Rite Aid in the Eyes of the FTC

The FTC identified several key failures by Rite Aid that culminated in the breach of its duty to protect its customers from reasonably foreseeable harm. First, Rite Aid allegedly failed to engage in appropriate due diligence when selecting its AI technology vendors for the program, including inquiring as to the accuracy of the technology in identifying matches. According to the complaint, virtually no pre-rollout accuracy testing was performed by either the third-party vendor or Rite Aid, and the enrollment images that Rite Aid uploaded into the database largely failed to meet the image quality standards recommended by vendors to improve the technology’s accuracy. Rite Aid also allegedly failed to establish periodic monitoring and review of the technology and, despite being alerted to the high occurrence of false-positive matches within the first few years of use, failed to take any corrective measures to remediate the defects in the database or in the technology itself. Finally, the complaint alleges that Rite Aid failed to adequately train its store-level employees to properly utilize and understand the technology and respond appropriately to match alerts.

The FTC alleges that Rite Aid’s top-to-bottom governance failures while deploying its facial recognition technology resulted in a high risk—known to Rite Aid—that the technology would generate false positives, and that employees would respond to false positives in a manner resulting in undue embarrassment, deprivation of needed medications, wrongful restraint, or wrongful arrest of its customers. In the settlement agreement reached between the FTC and Rite Aid, Rite Aid is prohibited from using any kind of facial recognition technology for a period of five years and may not renew its use of the technology until it has developed reasonable procedures for testing, data integrity and governance, and vendor and employee oversight.

The FTC’s action against Rite Aid sends a clear message about exposure risks associated with the growing use of AI technology. A business’s failure to incorporate infrastructural frameworks for proper vendor selection, data governance, and company training when developing and implementing AI technology in its business practices could leave it vulnerable to litigation, and not just by regulators. The breadth of injuries to customers identified in the FTC complaint provides a roadmap for plaintiffs looking to plead cognizable claims against businesses using customer-facing technology that results in harms ranging from mild to severe.

Key Takeaways

The Rite Aid complaint makes clear that adequate safeguards for the successful and safe use of AI technology should be deployed early and often by all types of businesses. The National Institute of Standards and Technology responded to the increasing demand for guidance on proper AI technology use with its Artificial Intelligence Risk Management Framework in January 2023, which addresses each of the key failures identified in the Rite Aid complaint. As regulators across industries join in the scramble to manage and oversee the rapidly evolving landscape of AI technology use, the magnitude of potential risks to businesses seeking to integrate AI into their business practices will continue to grow. The Rite Aid complaint serves as a cautionary tale to all businesses that the AI space is no longer the Wild West, and there are plenty of new sheriffs in town. 

Stay tuned for part II of this article, where we provide a roadmap to help companies Map, Measure, Manage, and Govern their AI risks and usage.

Meredith M. Leary is a Mintz litigator with extensive project management and case management experience in the life sciences, software, and manufacturing industries. Meredith's practice focuses on risk assessment and mitigation in the litigation and arbitration contexts.
Sav focuses their practice on complex civil litigation and appellate matters. They have experience conducting legal research and preparing legal memoranda and briefs.