AI in the Workplace: Issue Spotting for Employers

Artificial intelligence is no longer a future consideration for employers – it is already reshaping how companies hire, manage, and engage their workforces, and how employees perform their job duties. From AI-powered resume screening tools to automated note-taking applications and generative AI platforms embedded in everyday workflows, AI has become a fixture in the modern workplace. But with rapid adoption comes a host of employment law considerations that employers cannot afford to overlook.

Below, we identify key areas on which employers should focus their attention to ensure compliance. 

We will also cover the intersection between AI and employment law at our upcoming Mintz Employment Law Summits in New York (April 30), Boston (May 7), and San Diego (June 2). The Mintz Employment Law Summits serve as a forum designed to address the most pressing challenges confronting employers today, with programming centered on the key issues shaping the workplace now and in the year ahead.

AI in the Hiring Process and Other Employment-Based Decisions 

One of the most common and legally significant uses of AI in the workplace is in recruitment and hiring, where employers increasingly rely on AI tools to sort resumes, rank candidates, and conduct interviews. A growing number of states and localities – including New York City, Illinois, California, and Colorado – have enacted or proposed laws specifically addressing AI in employment decisions. At the same time, existing federal, state, and local anti-discrimination statutes – including Title VII, the ADEA, and the ADA – apply with full force to AI-assisted decision-making, and AI tools trained on historical data can perpetuate biases related to race, sex, age, disability, and other protected characteristics. This creates particular vulnerability under disparate impact theories, especially where an employer cannot explain how an opaque algorithmic process arrived at its decision. Recent litigation underscores these risks. In Mobley v. Workday, No. 23-cv-00770 (N.D. Cal.), for example, the court ruled that employers can be held liable for AI-based screening tools that allegedly discriminate and permitted disparate impact claims to proceed; the court later certified a collective action under the ADEA. 

And the legal risks extend beyond anti-discrimination compliance. In Kistler et al. v. Eightfold AI Inc. (filed Jan. 26, 2026), a class of plaintiffs alleged that an AI hiring platform collected inaccurate information and scored applicants without proper disclosures under the Fair Credit Reporting Act, framing AI hiring tools as a consumer protection issue rather than one of discrimination.

These cases raise important questions about who bears legal responsibility for automated decision-making, whether employers are keeping a “human in the loop,” and how they audit these processes. Importantly, these questions extend beyond hiring to any employment-based decision – including performance management and termination – where an employer uses an AI system or tool.

Guarding Employee Privacy When Using AI

Beyond discrimination risk, the prevalent use of AI in the workplace, particularly AI-powered recording and note-taking tools, poses additional risks related to workplace surveillance, data security, and consent. For example, using a recording tool without consent can violate federal and state wiretapping statutes that carry hefty penalties, particularly those with “all party” consent rules, such as Massachusetts and California. Some AI recording tools may silently join meetings, creating non-compliance risk not only under wiretapping statutes but also under state biometric information privacy laws – such as the Illinois Biometric Information Privacy Act – if a recording captures employee biometric data like voiceprints or facial images. Employers should proactively establish clear policies and secure consent mechanisms, and consult with counsel on the implications of using AI recording tools before deploying them broadly.

Revising Employment Documents to Account for AI

As AI becomes embedded in daily workflows, employers should review and update key employment documents to close gaps that traditional agreements were not designed to address. For example:

  • Offer Letters and Employment Agreements. As AI tools become more sophisticated, it is increasingly easy for candidates to use AI to emulate skill sets, fabricate experience, or misrepresent qualifications during the hiring process. To mitigate this risk, employers should consider incorporating certification language into offer letters and employment agreements requiring the employee to affirmatively represent and warrant that the skills, qualifications, training, and professional experience they presented during the application and interview process are true, accurate, and complete, and that any material misrepresentation constitutes grounds for rescission of the offer or termination of employment. This type of provision gives employers a clear basis for taking action if it is later discovered that a new hire used AI to artificially inflate their credentials or simulate competencies they do not actually possess.
  • Job Descriptions. Employers should proactively update job descriptions to reflect how AI is being integrated into specific roles. Rather than listing vague requirements such as “AI proficiency,” job descriptions should identify the specific AI tools, platforms, or competencies expected for the role. Accurate job descriptions are critical for defending ADA claims and in some instances, Title VII and ADEA claims as well.
  • Restrictive Covenant Agreements. Standard restrictive covenant agreements were not drafted with generative AI in mind, and provisions that once adequately protected confidential information and trade secrets may now leave significant gaps. When confidential information is input into an open-source AI platform, it may become integrated in a way that makes it impossible to segregate, remove, or return an employer’s data. Under various state and federal trade secret laws, uploading a trade secret into an open-source AI platform may be argued to destroy secrecy, demonstrate a failure to implement reasonable protective measures, or waive protections that underpin valuation. Employers should consider incorporating AI-specific provisions into restrictive covenant agreements, including those which explicitly prohibit employees from inputting confidential information into any unapproved AI tool and from using confidential information to train, fine-tune, or improve any AI model or system.
  • AI Policy and Handbook Updates. Every employer should have a clear, comprehensive AI use policy in place. An effective policy should, at a minimum, address authorized AI tools, data protection, human-in-the-loop requirements, acceptable and prohibited uses, and protocols for AI-assisted note-taking, if applicable. The policy should make clear that violations will result in disciplinary action. Employers should also review their employee handbooks to determine what, if any, other policies may require updates in light of AI use in the workplace – for example, anti-harassment and anti-discrimination policies, codes of conduct, information security policies, and confidentiality policies.

Employee Training 

Employee training is a critical and often overlooked component of responsible AI adoption. The Department of Labor’s recently issued AI Literacy Framework signals that regulators view AI literacy as a baseline workforce expectation. A number of practical and legal questions arise in this space. What level of AI literacy is appropriate for each role, and how should employers document and track compliance? How should training programs address the risk of employees inputting proprietary data, trade secrets, client information, or personally identifiable information into unauthorized AI platforms, and what are the consequences if they do? 

As AI capabilities evolve rapidly, one-time training is unlikely to be sufficient, and employers should consider how their programs will keep pace. Training should set expectations on permissible and impermissible uses of AI and should cover use case boundaries (i.e., distinguishing between where AI may and may not be used in workflows), human-in-the-loop requirements, the risk of hallucinations and fabrications, and guardrails for protecting confidential data when using AI. Getting these questions right matters not only for operational efficiency; a well-documented AI training program can also serve multiple protective functions for employers, including establishing that an employer took reasonable steps to prevent and correct harms that could result from AI use.

AI and Employment Litigation 

In a recent blog post, we discussed a decision with far-reaching implications for employers in litigation or investigations. In U.S. v. Heppner, the court held that electronic documents a defendant created using the consumer version of the generative AI tool Claude were not protected by the attorney-client privilege or the work product doctrine. The court reasoned that no attorney-client privilege exists where counsel neither directs nor suggests that a client interact with generative AI to seek legal advice, and where the tool’s terms of service made clear there could be no expectation of confidentiality in user inputs. Because Heppner independently chose to use Claude, no privilege attached to the information he shared with the tool, even though he incorporated information from his attorneys into his prompts and intended to share Claude’s output with his counsel.

While Heppner is an early decision and the case law in this area will undoubtedly continue to develop, it nonetheless serves as an important reminder not only for litigation conduct, but also for employers conducting internal investigations and relying on AI to assist with investigation-related tasks. Employers should consult with counsel before using AI tools in any context that may implicate privilege or work product protections.

Looking Ahead

AI adoption in the workplace is accelerating, and the legal landscape is evolving just as quickly. Employers that take a proactive, cross-functional approach – aligning their employment practices, policies, and training programs with emerging legal requirements – will be best positioned to leverage AI’s benefits while managing risk. Mintz’s employment group is actively tracking developments across the country in AI-related legislation, regulatory guidance, and litigation, and regularly counsels employers on these issues. Join us at Mintz’s upcoming Employment Law Summits, where we will further dissect these topics and others.

Authors

Andrew is a seasoned transactional attorney who advises public and private companies, as well as C-Suite and business executives, on a broad range of sophisticated compensation matters.
Emma counsels clients on a wide variety of employment issues and litigates employment disputes before state and federal courts and administrative agencies. Her litigation practice includes restrictive covenant agreements; discrimination, sexual harassment, and retaliation claims; and wage and hour compliance.