As Employer Use of Artificial Intelligence Continues to Expand, the EEOC Issues Guidance on Disparate Impact Discrimination
On May 18, 2023, the United States Equal Employment Opportunity Commission (EEOC) issued new guidance on employer use of software algorithms and artificial intelligence in employment selection procedures. Unlike the EEOC’s other recent guidance on artificial intelligence, this publication focuses heavily on the potential for disparate impact discrimination arising from employer use of artificial intelligence and foreshadows how the EEOC will scrutinize employer use of algorithmic decision-making tools.
Notably, the EEOC highlighted that the guidance was not “new policy” and that the information did not carry the force or effect of law. Rather, the guidance was informational and designed to help employers as they continue to utilize artificial intelligence in employment decisions.
As the use of artificial intelligence continues to evolve, employers should take note of this new guidance and the potential legal issues associated with the use of artificial intelligence in decision-making processes. Below is a synopsis of the EEOC’s most recent guidance.
1. Selection Procedures, Selection Rates, and the “Four-Fifths” Rule
Title VII generally prohibits discrimination in employment on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. Title VII’s prohibitions also extend to discrimination resulting from facially neutral selection procedures that disproportionately impact applicants or employees based on one of the above-mentioned protected categories.
The EEOC identified several types of software that incorporate algorithmic decision-making. Those include:
- Resume scanners that prioritize applications using certain keywords;
- Employee monitoring software that rates employees on their keystrokes or other factors;
- Virtual assistants or chatbots that ask job candidates about their qualifications and reject those who do not meet pre-defined standards;
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
- Testing software that provides job fit scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit.”
So, what is a selection procedure as it relates to artificial intelligence, and how can an employer determine if its procedure is creating a disparate impact? From the EEOC’s perspective, an employer’s selection procedure can encompass the use of algorithmic decision-making tools when they are used to make or inform decisions about whether to hire, promote, terminate, or take other similar actions toward applicants or current employees.
The EEOC states that employers can analyze whether an algorithmic decision-making tool creates a disparate impact by determining whether the procedure causes a selection rate for individuals in a protected class that is substantially less than the selection rate for individuals in another group. To that point, the EEOC addressed employer use of the four-fifths rule, a “rule of thumb” for determining whether the selection rate for one group is substantially different from that of another. Specifically, under the four-fifths rule, the selection rate for one group is “substantially” different from the selection rate of another group if the ratio of the two rates is less than four-fifths (or 80 percent).
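To make the arithmetic concrete, below is a minimal sketch of the four-fifths calculation. The applicant counts, function names, and code are illustrative only and are not drawn from the EEOC’s guidance; only the four-fifths (80 percent) benchmark itself is.

```python
# Illustrative four-fifths "rule of thumb" arithmetic.
# All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical example: 48 of 80 applicants selected in Group A,
# 12 of 40 applicants selected in Group B.
rate_a = selection_rate(48, 80)  # 0.60
rate_b = selection_rate(12, 40)  # 0.30

ratio = four_fifths_ratio(rate_a, rate_b)  # 0.30 / 0.60 = 0.50

# Under the rule of thumb, a ratio below 0.80 suggests the selection
# rates may be "substantially" different and warrant closer review.
if ratio < 0.8:
    print(f"Ratio {ratio:.2f} is below four-fifths; review the tool.")
else:
    print(f"Ratio {ratio:.2f} meets the four-fifths threshold.")
```

Note that, as discussed below, a ratio at or above four-fifths is a rule of thumb only, not a safe harbor.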
Although the EEOC refers to the “rule of thumb,” employers should remain mindful that the four-fifths rule does not guarantee that a selection procedure complies with Title VII. The EEOC’s guidance reminds employers that courts have held that the rule is not appropriate in all circumstances and that, should a selection procedure be challenged under Title VII, the EEOC may not consider compliance with the rule sufficient to show the selection procedure is lawful.
2. Employers May Be Liable for Vendor-Created Selection Procedures
Many employers utilize outside vendors to develop algorithmic decision-making tools. In doing so, employers should recognize that they cannot escape liability under Title VII simply because they did not develop the tool. The EEOC guidance provides that employers may be liable for utilizing a selection procedure that violates Title VII, even if the procedure was developed by an outside vendor. The guidance further explains that employers may be responsible for the actions of their agents, which may include entities such as software vendors, if employers have given the vendors authority to act on their behalf.
3. Employers Should Act Swiftly Upon Discovering That the Use of an Algorithmic Decision-Making Tool May Have a Disparate Impact
The EEOC’s guidance states that employers should conduct self-analyses on an ongoing basis to determine if their employment practices have a discriminatory impact on protected classes of employees. If an employer discovers that it is utilizing an algorithmic decision-making tool that may create an adverse impact, it should take steps to reduce the impact or select a different tool to avoid violating Title VII. Indeed, an employer’s failure to adopt a less discriminatory algorithm that was considered during the development process can give rise to liability.
Practical Steps for Employers
1. Examine Vendor Contracts
It is common for employers to utilize third-party vendors to develop and/or administer algorithmic decision-making software. When they do, employers should thoroughly review the vendor contract to better understand potential liabilities related to the use of the software.
For example, employers should pay specific attention to any indemnification provisions in the vendor contract. The indemnification provisions should, at a minimum, address any claims or liability that an employer may face if the software is found to have an adverse impact on the basis of a characteristic protected under Title VII.
Employers should also review any representations and warranties related to the development of the software to determine if the vendor has conducted testing (e.g., four-fifths or statistical significance testing) to confirm (to the extent possible) that the software will not create a disparate impact.
2. Make Final Decisions Based on Human Intelligence, Not Artificial Intelligence
Even with the use of algorithmic decision-making tools or other artificial intelligence, employers should always conduct an independent analysis before making important employment-related decisions. Employers that choose to use artificial intelligence should treat it as a resource, not as a foolproof decision-making tool. In a busy world, artificial intelligence can certainly help employers be more efficient as they search for talent and prepare to make important workplace decisions; however, a warm-blooded human should ultimately review the results from an algorithmic decision-making tool and make final employment-related decisions.
3. Stay Up to Date
Over the past few years, the EEOC has shown that it is prioritizing the discriminatory impact that may result from employer use of artificial intelligence. In 2021, EEOC Chair Charlotte Burrows launched the Artificial Intelligence and Algorithmic Fairness Initiative, aimed at ensuring that the use of artificial intelligence in employment decisions complies with the federal civil rights laws the EEOC enforces. In May 2022, the EEOC published guidance discussing how the requirements of the Americans with Disabilities Act (ADA) may apply to the use of artificial intelligence. And in April 2023, the EEOC, the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, and the Federal Trade Commission issued a joint statement emphasizing that the use of artificial intelligence must be consistent with federal laws.
The information above reveals that the EEOC (and other federal agencies) will continue to scrutinize how employers use artificial intelligence to make hiring, firing, promotion, and other employment decisions. Thus, employers should monitor guidance from the EEOC and be proactive in adopting policies and procedures that reduce the likelihood that their use of artificial intelligence will cause an adverse impact in the workplace.
For more information, contact Charles E. Bush II or any member of Ice Miller's Workplace Solutions Group.
This publication is intended for general information purposes only and does not and is not intended to constitute legal advice. The reader should consult with legal counsel to determine how laws or decisions discussed herein apply to the reader's specific circumstances.