
Deepfakes Face Deep Trouble: Revenge Porn in the Workplace

The recently enacted TAKE IT DOWN Act makes it a federal offense to share nonconsensual explicit images online, regardless of whether the images are real or computer generated. The law is intended to protect victims from online abuse, set clear guidelines for social media users, and deter “revenge porn” by targeting the distribution of real and digitally altered exploitative content, including content involving children. While the 2019 SHIELD Act criminalized the sharing of intimate images with the intent to harm, the TAKE IT DOWN Act provides additional protections by establishing a removal system that allows victims to request the removal of harmful and intrusive images, and by requiring tech platforms to remove such images within 48 hours of receiving a takedown request from an identifiable person, or from an authority acting on that person’s behalf. The law responds to the recent wave of artificial intelligence (“AI”) “deepfakes,” realistic, digitally generated or altered videos of a person that are often created and distributed with malicious intent. The law also attempts to combat the rise of “nudification technology,” which creates highly realistic, sexually explicit images and videos by digitally removing the clothing from images of clothed individuals. Users of this technology can disseminate the images rapidly, broadly, and anonymously.

States have also enacted measures to combat revenge porn. As of 2025, nearly all 50 states have enacted laws criminalizing “revenge porn” and/or providing civil recourse for victims. For example, Massachusetts updated its Criminal Harassment Statute to prohibit the distribution of nude or partially nude images of individuals, or of individuals engaged in sexual acts. The New York Civil Rights Act provides a private right of action against those who disseminate intimate images without consent. Washington and Illinois each provide civil remedies for deepfake content. Minnesota announced that it is considering legislation targeting companies that run websites or apps that create, host, or disseminate explicit images or photos; and San Francisco filed a first-of-its-kind lawsuit seeking to shut down an app used to create AI-generated images of high school-aged girls across the globe. Legislative efforts in this area reflect the urgency to rein in the creation and distribution of harmful AI images. However, some of the broader efforts to regulate AI may collide with First Amendment rights. AI experts warn that lawmakers’ failure to narrowly tailor legislation targeting AI that creates deepfakes, for example, will invite free speech legal challenges. Accordingly, lawmakers may strategically target conduct rather than speech, to support an argument for a lower standard of judicial scrutiny when faced with a First Amendment challenge.

What do these developments mean for employers? Most employers are ill-prepared to respond quickly and effectively to the fast-moving threat of AI deepfakes and nudification. Failure to respond appropriately, however, may expose employers to liability. For example, employers can be held liable under Title VII of the Civil Rights Act of 1964 if deepfakes and nudification are used to create a hostile work environment, even if the images were created outside of work hours. Accordingly, employers should review their insurance policies to confirm whether coverage extends to cyber-related matters, such as deepfakes. There are also several proactive measures employers can take to mitigate liability by guiding the conduct of decision-makers and employees.

For example, employers should audit and update existing social media and harassment policies to describe deepfakes and nudification technology and the threat they pose to the workplace. Incorporating reporting and investigation protocols for digital impersonation or AI-powered cyberbullying into harassment policies is another way to equip personnel to respond to the threat. Specifically, employers should create a detailed response plan, with timelines, documentation requirements, and points of contact, for handling reports of inappropriate photos or videos that appear to depict an employee engaging in sexual or pornographic activity. Employers will want to take measures to avoid assumptions or hasty decisions that may result in punishing the victim of deepfake or nudification technology. Likewise, employers should create clear guidelines and a response plan for situations in which an employee is alleged to have distributed revenge porn or malicious deepfakes. Training on all of the above will go a long way, too.

Mintz continues to monitor this rapidly developing issue and stands ready to assist employers seeking further guidance.


Authors

Geri Haight is a Mintz Member and former in-house counsel who focuses on employment litigation, counseling, and compliance, as well as intellectual property and trade secret matters.
Tara Dunn Jackson is a Mintz Associate who litigates employment disputes before state and federal courts and administrative agencies and counsels clients on a broad spectrum of employment issues. She has experience with cases involving defamation, Title IX claims, and employment laws, as well as complex commercial litigation. Tara represents clients in a broad range of industries, including the education sector.