EEOC Issues Guidance Addressing How the Use of Artificial Intelligence in Employment Decisions Could Violate the ADA

As employers continue to introduce a variety of software, algorithmic, and artificial intelligence (“AI”)-based tools into the workplace to assist and support employment-based decisions, there is increasing concern that their use could result in discrimination against certain groups of individuals, including disabled applicants and employees.  On May 12, 2022, the United States Equal Employment Opportunity Commission (“EEOC”) issued technical guidance addressing how an employer’s use of software, algorithmic decision-making tools, and AI to assist in hiring workers, monitoring worker performance, determining pay or promotions, and establishing the terms and conditions of employment could violate the Americans with Disabilities Act (“ADA”).  It is worth noting that the Department of Justice also issued similar technical guidance for state and local government employers.

The EEOC guidance is the most recent installment in the EEOC’s “Initiative on Artificial Intelligence and Algorithmic Fairness,” which it announced on October 28, 2021 and which examines how algorithmic decision-making tools and AI are fundamentally changing employment decisions.  Echoing concerns reflected in emerging laws, such as New York City’s recently passed law regulating the use of automated employment decision-making tools, the EEOC’s recent action provides critical guidance that employers should note when implementing algorithmic decision-making tools and AI in workplace decisions.

The Technology Subject to the EEOC’s Guidance

The EEOC’s guidance primarily concerns:

  • “Software” – Defined broadly as information technology programs or procedures that provide instructions to a computer on how to perform a given task or function.  In the employment context, “software” can include automatic resume-screening software, hiring software, chatbots for hiring and workflow, video-interviewing software, and employee monitoring and worker management software.
  • “Algorithms” – Defined as a “set of instructions that can be followed by a computer to accomplish some end.”  The EEOC primarily uses this term to refer to the algorithms built into human resources software or applications that allow employers to process data to evaluate, rate, and make other decisions about job applicants and employees at various stages of employment, including hiring, performance evaluation, promotion, and termination.
  • “Artificial Intelligence” – Defined in the National Artificial Intelligence Initiative Act of 2020 as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”  The EEOC acknowledges that some employers and software vendors use AI when developing algorithms that help employers evaluate, rate, and make other decisions about job applicants and employees.  Using AI has typically meant that the developer relies in part on the computer’s own analysis of data to determine which criteria to use when making employment decisions.

The EEOC notes that employers may rely on different types of software that incorporate algorithmic decision-making, which may itself include AI, at a number of stages of the employment process.  Examples include the following (a minimal sketch of how one such tool can operate appears after the list):

  • resume scanners that prioritize applications using certain keywords;
  • employee monitoring software that rates employees on the basis of their keystrokes or other factors;
  • “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.
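
To make concrete how mechanically such a tool can operate, the following is a minimal, hypothetical sketch of the keyword-based resume scanner described in the first bullet above.  The keywords, scoring rule, and cutoff are invented for illustration; they do not reflect any actual vendor’s product or anything prescribed in the EEOC’s guidance.

```python
# Hypothetical illustration only: a naive keyword-based resume scanner.
# The keywords, threshold, and scoring rule are invented for this sketch.

REQUIRED_KEYWORDS = {"python", "sql", "project management"}
PASS_THRESHOLD = 2  # resumes matching fewer keywords are screened out

def score_resume(resume_text: str) -> int:
    """Count how many required keywords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)

def screen(resumes: dict[str, str]) -> list[str]:
    """Return the applicants whose resumes meet the keyword threshold."""
    return [name for name, text in resumes.items()
            if score_resume(text) >= PASS_THRESHOLD]

applicants = {
    "Applicant A": "Experienced in Python and SQL reporting.",
    "Applicant B": "Led project management for a data team using SQL.",
    "Applicant C": "Self-taught developer; describes the same skills in plain terms.",
}
print(screen(applicants))  # ['Applicant A', 'Applicant B'] -- C never reaches a human
```

The point of the sketch is that the rejection is purely mechanical: an applicant who describes equivalent skills in different words, or whose use of assistive technology alters how the application text is captured, can be filtered out before any human judgment is applied.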

Potential Ways ADA Violations Can Arise

The EEOC identifies three common ways that an employer’s use of these AI-based tools could violate the ADA:

  • The employer fails to provide a reasonable accommodation necessary for the employee or applicant to be fairly and accurately evaluated by the algorithm;
    • To ensure that employees or applicants are aware of their right to request a reasonable accommodation when a disability prevents them from being accurately assessed by a given tool, or where a disability could lead to a lower score than a non-disabled individual would receive, the EEOC advises that an employer may inform applicants or employees what steps an evaluation process includes and ask whether they will need a reasonable accommodation to complete it.  For example, if a hiring process involves a video interview, the EEOC suggests that employers inform applicants of this step and provide a way to request a reasonable accommodation.
  • The employer relies on an algorithm without appropriate safeguards that either intentionally or unintentionally “screens out” job applicants with a disability, even though they could perform the essential functions of the job with or without a reasonable accommodation;
    • To avoid an impermissible screen-out, the EEOC suggests that employers relying on algorithmic decision-making tools developed by a third-party vendor ascertain whether individuals with disabilities were taken into account when the tool was developed, which could include asking: (1) whether the vendor made the tool accessible to as many individuals with disabilities as possible; (2) whether the tool provides alternative formats; and (3) whether the vendor made any attempt to determine whether the tool disadvantages persons with disabilities.  An unlawful screen-out can occur, for example, where a “chatbot” (software designed to communicate with applicants online and through texts and emails) excludes all job applicants with significant gaps in their employment history.  This could unlawfully exclude applicants whose gap was due to a disability, as well as applicants whose gaps stem from other reasons that could implicate the anti-discrimination laws.  (A minimal sketch of how such a screen-out can occur appears after this list.)
    • To avoid screen-outs, employers developing their own tools can take similar precautions; employers should also clearly inform applicants and employees of the availability of reasonable accommodations and alternative assessments, and tell them in advance which traits and characteristics a given tool is designed to measure.
  • The employer’s algorithmic decision-making tool violates the ADA’s restrictions on disability-related inquiries and medical examinations.
    • Algorithmic decision-making tools could also inadvertently elicit information about an applicant’s disability or medical history, which the ADA prohibits before a conditional offer of employment is made.  For example, a chatbot should not be programmed to ask questions designed to identify an applicant’s medical conditions prior to a conditional offer of employment.
    • Even where a request for health-related information does not violate the ADA’s restrictions on disability-related inquiries and medical examinations, it might still violate other parts of the ADA (e.g., a personality test asking about optimism could screen out an individual with Major Depressive Disorder who answers those questions negatively).
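
To make the screen-out mechanism concrete, below is a minimal, hypothetical sketch of the employment-gap rule described above.  The six-month cutoff, the data layout, and the function names are invented for illustration; they do not come from the EEOC’s guidance or any actual vendor’s tool.

```python
# Hypothetical illustration only: how an automated rule can "screen out"
# applicants with disabilities. The six-month cutoff is an invented assumption.
from datetime import date

MAX_GAP_MONTHS = 6  # assumed rule: reject any employment gap longer than this

def months_between(end: date, start: date) -> int:
    """Whole months elapsed between the end of one job and the start of the next."""
    return (start.year - end.year) * 12 + (start.month - end.month)

def has_disqualifying_gap(jobs: list[tuple[date, date]]) -> bool:
    """Jobs as (start, end) pairs, sorted by start date."""
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return True
    return False

# This applicant's ten-month gap was time spent in medical treatment for a
# disability; the rule rejects them without ever learning (or caring) why.
history = [(date(2018, 1, 1), date(2020, 3, 1)),
           (date(2021, 1, 1), date(2022, 5, 1))]
print(has_disqualifying_gap(history))  # True -> applicant auto-rejected
```

The rule never asks why a gap exists, which is precisely the problem the EEOC identifies: an applicant whose gap reflects disability-related treatment is rejected on the same terms as anyone else, with no opportunity to request an accommodation or provide context.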

The EEOC makes clear that employers will likely be responsible under the ADA for the AI-based tools they implement, even if outside vendors developed the tools.  Employers should inquire with vendors before purchasing AI-based tools to understand whether individuals with disabilities, and the potential for disability bias (as well as other forms of bias), were taken into account during the design process.  The EEOC further cautions that employers should not merely accept labels advertised by vendors at face value.  Indeed, while a product may be labeled “bias-free,” it might not take into account all possible forms of bias; similarly, advertising a tool as “validated” only means that it accurately measures a certain trait or characteristic important for a job, not that it does so free of bias.

EEOC’s “Promising Practices” for Employers

The EEOC identified a number of safeguards—referred to as “promising practices”—that employers should bear in mind when considering or implementing the use of AI-based tools for employment decisions.  These “promising practices” include:

  • Training staff to recognize and process reasonable accommodation requests efficiently;
  • Developing alternative means of assessment that may be used if a reasonable accommodation is requested;
  • Ensuring that third-party vendors forward all accommodation requests to the employer, or entering into an agreement with vendors to provide reasonable accommodations on the employer’s behalf;
  • Using tools that have been designed with individuals with a broad range of disabilities in mind;
  • Informing applicants and employees that reasonable accommodations are available and providing clear instructions for requesting them;
  • Clearly and succinctly describing, in accessible formats, the traits and characteristics the given algorithm is designed to assess, as well as any variables that may affect the rating;
  • Ensuring that the algorithmic decision-making tool measures abilities or qualifications that are truly necessary for the job’s essential functions;
  • Ensuring that those abilities or qualifications are measured directly, rather than by way of scores on correlated characteristics; and
  • Inquiring with vendors before purchasing an algorithmic decision-making tool to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.

States and Localities Have Already Begun Regulating These Tools

The EEOC’s guidance and October 2021 initiative are part of a larger trend nationwide to regulate AI-based tools.  New York City, Illinois, and Maryland have all enacted laws regulating these tools to some extent.  New York City’s law, effective January 1, 2023, is far-reaching, affects most conceivable uses of AI by employers, and codifies many of the suggestions made by the EEOC.

For example, the New York City law requires that employers conduct “bias audits,” notify employees that AI-based tools will be used in connection with a given assessment, inform them of the specific characteristics the tool is designed to assess, and allow candidates to request alternative processes or accommodations.  Illinois enacted the Artificial Intelligence Video Interview Act (“AIVIA”), effective January 1, 2020, which requires employers using AI to analyze video interviews for positions “based in Illinois” to notify applicants that AI will be used in the evaluation process, inform them of the characteristics it will assess, and obtain their consent.  Maryland’s HB1202, effective October 1, 2020, is similarly aimed at the hiring process: it prohibits employers from using a facial recognition service to create a facial template during an applicant’s interview for employment unless the applicant consents.

Takeaways for Employers

Employer use of AI-based tools in the workplace can assist in streamlining hiring and evaluation processes, and enable employers to focus more energy and resources on their core businesses and objectives.  Nevertheless, employer diligence is paramount as this evolving technology carries with it responsibilities that employers must be cognizant of going forward.  Employers using AI-based tools in the workplace should:

  • Follow the “promising practices” listed by the EEOC (and discussed herein);
  • Perform diligence on AI-based tools, including the claims by the vendors that sell or develop these tools, and avoid relying on assurances without appropriate verification that these tools are free of “bias” or will not violate the ADA; and
  • Monitor future guidance from the EEOC and other government agencies, including state and local entities such as the New York City Commission on Human Rights. 

It is incumbent on employers to get ahead of the curve and ensure that they are designing, selecting and implementing these tools in a manner consistent with their obligations under the law, including the ADA and other workplace anti-discrimination laws.  The Mintz Employment, Labor & Benefits team stands ready to assist employers in ensuring compliance with the ever-evolving workplace laws.

Authors

Michelle is an accomplished employee benefits and executive compensation lawyer with more than 25 years of experience advising clients on ERISA, benefits, and executive compensation matters, including in connection with corporate transactions.
Evan M. Piercey is an Associate at Mintz who litigates employment disputes before state and federal courts and administrative agencies. He also advises clients on a range of issues, including employment agreements and compliance with employment laws.