What are the legal restrictions governing how employers may use artificial intelligence in the workplace?
Businesses have long embraced computer technology in the workplace as a means of improving the efficiency and productivity of their operations. In recent years, businesses have incorporated artificial intelligence and other automated and algorithmic technologies into their computer systems. We will refer to these technologies as “AI Systems.” Recent reports indicate that 99 percent of Fortune 500 companies and 70 percent of employers overall use some form of artificial intelligence to screen or rank candidates for hire.[1] For example, businesses may use video interview software to assess job candidates’ tone of voice, body language, speech patterns, and gestures. Chatbots can pose questions to job candidates, with pre-programmed follow-up questions that vary with the candidate’s responses. AI Systems can track office workers’ time using keystroke monitoring, eye movements, or internet browsing history. Businesses may use AI Systems for scheduling and task assignment, or to track workers’ geographic locations. The current and future applications are virtually limitless.
Federal and state legislatures have not kept pace with the changes in technology. However, some federal government agencies previously identified the use of AI Systems as raising concerns regarding employers’ compliance with the law. Two states and one municipality have stepped into the breach and enacted laws governing the use of AI Systems in the workplace. In this article, we provide an overview of the federal regulatory guidance and the state and local rules in place so far. We then make several suggestions regarding how employers may wish to address these developments with policies and practices to reduce legal risk.
Federal Guidance
In recent years, the Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL) each released guidance pertaining to the use of AI Systems in the workplace. The EEOC’s guidance addressed adverse impact in selection procedures under Title VII of the Civil Rights Act of 1964 (Title VII) and assessments of job applicants and employees under the Americans with Disabilities Act (ADA). The DOL issued its own guidance in October 2024 entitled, “Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers” (Principles and Best Practices).
Very recently, President Trump’s administration has taken steps to deregulate the development and use of AI Systems at the federal level, including by retracting the EEOC’s and DOL’s guidance, revoking former President Biden’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and retracting the Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights.”[2] Despite being retracted, the EEOC’s and DOL’s guidance may still offer helpful information for employers to consider in their efforts to ensure their use of AI Systems complies with the law.
EEOC Guidance
Under the EEOC’s Title VII guidance (issued in May 2023), the EEOC’s “Uniform Guidelines on Employee Selection Procedures” issued in 1978 (the Guidelines) apply to the use of “algorithmic decision-making tools” for a “selection procedure,” which is “any measure, combination of measures, or procedure if it is used as a basis for an employment decision.” The EEOC’s guidance makes clear that employers may use the calculations established in the Guidelines (which compare whether the selection rate for individuals in a protected group is “substantially different” from the rate for another group) as a “rule of thumb” to assess whether an algorithmic decision-making tool has an adverse impact on the basis of race, color, religion, sex, or national origin. Under the Guidelines’ well-known “four-fifths rule,” a selection rate for one group that is less than four-fifths (80 percent) of the rate for the group with the highest rate will generally be regarded as evidence of adverse impact.
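To illustrate, a minimal sketch of the four-fifths arithmetic in Python might look like the following (the group labels and applicant counts are hypothetical, and the sketch is an illustration only, not a compliance tool):

```python
# A minimal sketch of the Guidelines' four-fifths rule of thumb, using
# hypothetical applicant data. Group labels, counts, and the fixed 0.8
# threshold are illustrative assumptions, not a compliance tool.

def selection_rates(applicants: dict[str, int], selected: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: number selected / number of applicants."""
    return {group: selected[group] / applicants[group] for group in applicants}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose rate is below 80% of the highest group's rate."""
    highest = max(rates.values())
    return {group: rate < 0.8 * highest for group, rate in rates.items()}

# Hypothetical screening results from an algorithmic decision-making tool.
applicants = {"Group A": 200, "Group B": 150}
selected = {"Group A": 80, "Group B": 36}

rates = selection_rates(applicants, selected)  # Group A: 0.40, Group B: 0.24
flags = four_fifths_flags(rates)               # Group B flagged: 0.24 < 0.32
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, flagged: {flags[group]}")
```

The EEOC’s guidance cautions that the four-fifths comparison is only a rule of thumb; smaller differences in selection rates may still support a discrimination claim, particularly where they are statistically significant.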
The EEOC’s ADA guidance (issued in May 2022) warns employers that the use of algorithmic decision-making tools could violate the ADA by: (1) failing to provide applicants and employees with reasonable accommodations as necessary to be fairly and accurately assessed by the algorithm, (2) relying on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability who could perform the essential functions of a job with a reasonable accommodation, and (3) adopting a tool that poses “disability-related inquiries” or seeks information from an applicant that qualifies as a “medical examination” before extending a conditional offer of employment.
DOL Principles and Best Practices
The DOL’s Principles and Best Practices provided recommendations for developing, using, and assessing AI Systems in the workplace, including that employers:
- allow workers “genuine input in the design, development, testing, training, use, and oversight of AI systems”;
- establish “clear governance systems, procedures, human oversight, and evaluation processes for AI Systems for use in the workplace”;
- disclose the use of AI Systems to workers and job candidates; and
- ensure their use of AI Systems does not “violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.”
State and Local Laws
In light of the federal shift toward deregulation of AI Systems, employers should anticipate that more states and localities will fill the void by adopting their own legislation and regulations covering the use of AI Systems in the workplace, as three have already done (New York City, Colorado, and Illinois). A common thread in the current state and local legislation governing the use of AI Systems in employment is the requirement that employers provide employees and applicants with notices or disclosures about the use of AI Systems and, in some cases, perform and publish assessments or audits of the AI Systems for discriminatory impact.
NYC Local Law 144
First among laws specifically regulating the use of AI Systems in the workplace was New York City’s Local Law 144 (effective January 1, 2023). The ordinance makes it unlawful for employers and employment agencies to use an “automated employment decision tool” (AEDT) to “screen a candidate or employee for an employment decision” within the city—unless the tool has been subjected to a “bias audit” within one year before use and information about the bias audit and the tool is published on the employer’s or employment agency’s website prior to use. N.Y.C. Admin. Code § 20-871(a). Employers and employment agencies must also provide prior notice to employees and candidates that an AEDT will be used in connection with the employment decision, the job qualifications or characteristics that the AEDT will assess, and other specified information. Id. § 20-871(b).
Colorado Anti-Discrimination in AI Law
In 2024, Colorado became the first state to enact legislation comprehensively addressing “algorithmic discrimination” against consumers (including employees) residing in the state. Colorado’s Anti-Discrimination in AI statute (CADAI) takes effect on February 1, 2026.
Among other things, the CADAI requires a “deployer” of a “high-risk artificial intelligence system” to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination,” which consists of “unlawful differential treatment or impact” based on a consumer’s protected characteristics caused by the use of an AI system. C.R.S. §§ 6-1-1701(1), -1703(1). “High-risk artificial intelligence systems” are ones that make, or are a substantial factor in making, “consequential decisions.” Id. § 6-1-1701(9)(a). The term “consequential decision” is defined broadly and includes a decision that has a material effect on a consumer’s “employment or an employment opportunity.” Id. § 6-1-1701(3).
In addition to establishing a standard of care, the CADAI generally requires deployers of high-risk artificial intelligence systems to complete “impact assessments” at least annually and within 90 days of any “intentional and substantial” modification of the system. Id. § 6-1-1703(3). Impact assessments must include certain disclosures, including:
- a statement of the “purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system”;
- an analysis of whether the system “poses any known or reasonably foreseeable risks of algorithmic discrimination” and the steps taken to mitigate those risks;
- a description of the categories of data inputs for the system and the system’s outputs; and
- information regarding the deployer’s evaluation and monitoring of the system.
Id. Further, the CADAI imposes additional requirements on deployers of high-risk artificial intelligence systems, including requirements to prepare “a risk management policy and program” governing the use of the system, to publish information about the high-risk artificial intelligence systems the deployer uses, and to provide disclosures to consumers when a high-risk artificial intelligence system is used to make, or be a substantial factor in making, a consequential decision concerning the consumer. Id. § 6-1-1703(2), (4), and (5).
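For employers building compliance workflows, these disclosure items lend themselves to a simple internal record. The following Python sketch is illustrative only; the field names are our own shorthand for the statutory categories, and neither the CADAI nor any regulation prescribes a particular format:

```python
# An illustrative internal record for tracking CADAI impact-assessment
# disclosures. Field names are our own shorthand for the statutory
# categories; the statute does not prescribe any particular format.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    purpose_and_use: str        # purpose, intended use cases, deployment context, benefits
    risk_analysis: str          # known or reasonably foreseeable discrimination risks and mitigations
    data_inputs: list[str] = field(default_factory=list)  # categories of data the system consumes
    outputs: str = ""           # description of the system's outputs
    monitoring: str = ""        # how the deployer evaluates and monitors the system

    def reassessment_due(self, today: date, modified_on: Optional[date] = None) -> bool:
        """Due at least annually, and after any intentional and substantial
        modification made since the last assessment (the CADAI allows 90
        days to complete the post-modification assessment)."""
        if modified_on is not None and modified_on > self.assessment_date:
            return True
        return (today - self.assessment_date).days >= 365
```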
Illinois Human Rights Act Amendment
In August 2024, several months after Colorado passed the CADAI, Illinois enacted its own legislation amending the Illinois Human Rights Act (IHRA); the amendment takes effect on January 1, 2026. Under the amended IHRA, it is a civil rights violation for an employer to “use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes” or to “use zip codes as a proxy for protected classes” with respect to “recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” 775 ILCS 5/2-102(L). It is also a civil rights violation for an employer to “fail to provide notice to an employee that the employer is using artificial intelligence” for the purposes described in the previous provision. Id.
Unlike Local Law 144 and the CADAI, the amended IHRA does not require employers to conduct bias audits or impact assessments for AI Systems used in making employment decisions or establish governance procedures for the use of AI Systems.
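Even without a mandated audit, employers subject to the IHRA may want lightweight technical checks of their own. The following Python sketch illustrates one such check aimed at the zip-code provision; the regular-expression pattern and feature names are assumptions for illustration, and a name-based heuristic supplements, rather than replaces, substantive review:

```python
# An illustrative pre-deployment screen for the amended IHRA's zip-code
# provision: flag model input features whose names suggest zip or postal
# codes. The pattern and feature names are assumptions for illustration;
# a name-based check cannot catch location data laundered through
# engineered features, so it supplements, not replaces, human review.
import re

ZIP_LIKE = re.compile(r"zip|postal|zcta", re.IGNORECASE)

def flag_zip_like_features(feature_names: list[str]) -> list[str]:
    """Return feature names that appear to encode zip or postal codes."""
    return [name for name in feature_names if ZIP_LIKE.search(name)]

features = ["years_experience", "skills_score", "home_zip_code", "zcta_median_income"]
print(flag_zip_like_features(features))  # ['home_zip_code', 'zcta_median_income']
```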
Practice Suggestions
Moving forward, employers should expect a growing patchwork of state and local laws specifically covering the use of AI Systems in employment, layered on top of the existing federal, state, and local employment laws that continue to apply to employers’ use of AI Systems.
While AI-specific employment laws impose different requirements, employers would be prudent to consider implementing policies and practices that address the common legal requirements applicable to the use of AI Systems in employment, including:
- establishing governance structures that ensure human oversight of AI Systems and significant employment decisions;
- assessing the organization’s use of AI Systems (including by identifying the systems in use and their sources of data, evaluating AI vendors, and performing regular audits or assessments to evaluate the systems for disparate treatment or impact; a minimal inventory sketch appears after this list); and
- providing notice and training to workers on the use and purpose of AI Systems (including notice that employees and applicants may request reasonable accommodations for disabilities).
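As referenced above, a minimal inventory sketch in Python might look like the following (the field names and annual audit cadence are illustrative assumptions, not legal requirements):

```python
# An illustrative AI System inventory entry supporting the practices above:
# one place to record each system, its vendor and data sources, who
# oversees it, whether notice has been given, and when it was last audited.
# Field names and the annual cadence are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str                          # e.g., "resume screening", "scheduling"
    data_sources: list[str] = field(default_factory=list)
    human_reviewer: str = ""              # who provides human oversight of outputs
    notice_provided: bool = False         # workers/applicants notified of the system's use
    last_audit: Optional[date] = None     # most recent disparate-impact audit or assessment

    def audit_overdue(self, today: date, cadence_days: int = 365) -> bool:
        """Overdue if never audited or if the chosen cadence has lapsed."""
        return self.last_audit is None or (today - self.last_audit).days > cadence_days
```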
Of course, employers should keep a close watch on legislative and regulatory developments affecting their use of AI Systems in the workplace.
[1] January 31, 2023 Testimony to the EEOC of ReNika Moore, Director of the American Civil Liberties Union’s Racial Justice Program, available at https://www.eeoc.gov/meetings/meeting-january-31-2023-navigating-employment-discrimination-ai-and-automated-systems-new/moore#_ftnref79 (last visited January 28, 2025).
[2] https://www.whitehouse.gov/presidential-actions/2025/01/initial-rescissions-of-harmful-executive-orders-and-actions/ (last visited January 28, 2025).
Reprinted with permission from the February 5, 2025 edition of the NEW YORK LAW JOURNAL © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.