EEOC Settles over Recruiting Software in Possible First Ever AI-related Case

September 21, 2023

On September 8, 2023, a federal court approved a consent decree between the Equal Employment Opportunity Commission (EEOC) and iTutorGroup Inc. and its affiliates (“iTutor”) resolving alleged age discrimination in hiring stemming from automated systems in recruiting software. The settlement arrives on the heels of the EEOC’s announcement of its artificial intelligence (AI) guidance initiative, and many are calling this case the agency’s first ever AI-based antidiscrimination settlement.1 While it is not clear what, if any, AI tools iTutor used for recruiting, one thing is certain: We will soon see many more lawsuits involving employers’ use of algorithms and automated systems, including AI, in recruitment and hiring.2

In the lawsuit, the EEOC claimed that the Shanghai, China-based English-language tutoring provider used software programmed to automatically reject both female candidates over the age of 55 and male candidates over 60 for tutoring roles, in violation of the Age Discrimination in Employment Act (ADEA). According to the agency, the EEOC filed the case in May 2022 after iTutor failed to hire Charging Party Wendy Picus and more than 200 other applicants aged 55 and older, allegedly because of their age.3 The case is also notable because iTutor treats its tutors as independent contractors rather than employees, and the ADEA does not cover independent contractors. Nonetheless, under the consent decree filed on August 9, 2023 with the U.S. District Court for the Eastern District of New York, iTutor will pay $365,000 to more than 200 job candidates who were automatically screened out by its recruiting software to resolve the EEOC’s claims.4
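
To make concrete what “automated decision making” (as opposed to generative AI) can look like in practice, the short Python sketch below shows a hypothetical, hard-coded age rule of the kind the EEOC alleged. It is purely illustrative: it is not iTutor’s actual software, and the names, fields and structure are our own assumptions.

# Hypothetical illustration of a hard-coded age screen of the kind the EEOC
# described -- simple automated decision making, not generative AI.
# This is NOT iTutor's actual code; all names and structure are assumed.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gender: str  # "female" or "male" (assumed field for illustration)
    age: int

def auto_screen(applicant: Applicant) -> bool:
    """Return True if the applicant is automatically rejected.

    Mirrors the rule alleged in the complaint: reject women over 55
    and men over 60, regardless of qualifications.
    """
    if applicant.gender == "female" and applicant.age > 55:
        return True
    if applicant.gender == "male" and applicant.age > 60:
        return True
    return False

print(auto_screen(Applicant("A. Example", "female", 56)))  # True -> auto-rejected
print(auto_screen(Applicant("B. Example", "male", 58)))    # False -> advances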

In addition to monetary relief, iTutor must allow applicants who were rejected because of their age to reapply, and must report to the EEOC which of those applicants were considered, provide the outcome of each application and give a detailed explanation whenever an offer is not made.5

Antidiscrimination Training as Part of Settlement

The consent decree also imposes a number of “injunctive relief” requirements on iTutor that apply if or when the company resumes hiring and run for the longer of five years or three years from the resumption date, including:

  • Prohibiting iTutor from requesting applicants’ birth dates or screening based on age, aside from confirming that applicants are over 18 to comply with existing laws.6
  • Distributing a memo about federal antidiscrimination laws to all employees and independent contractors involved in the selection process, and posting or distributing the memo to all applicants as well.
  • Updating and distributing its antidiscrimination policies and complaint procedures applicable to screening, hiring and supervision.
  • Training supervisors, managers and other employees or contractors involved in the screening and selection process on the company’s obligations under federal antidiscrimination laws using an EEOC-approved third party.7
  • Fulfilling pre-training notification and post-training reporting requirements.
  • Monitoring and reporting, including providing written notice to the EEOC of any discrimination complaints from employees or applicants on the day iTutor resumes considering applicants and every six months thereafter.8

Using AI in Hiring

Just because the United States lacks a comprehensive AI law does not mean the AI space is unregulated. Agencies such as the EEOC, the Department of Justice (DOJ) and the Federal Trade Commission (FTC) have released statements on their intent to tackle problems stemming from AI in their respective domains. After a delay, New York City’s new law governing AI in employment decisions took effect this July.

The proliferation of AI in recruiting and hiring means that many employers will find themselves on the front lines of important compliance questions from the EEOC. With more legal actions and settlements on the way, employers will need a strategy for the proper use of AI tools in candidate selection. While this case may not have involved AI decision making, both the EEOC and the FTC have maintained that employers may be responsible for decisions made by their AI tools, including when they use third parties to deploy them. Employers need to understand the nature of the AI tools used in their hiring and recruiting processes, including how the tools are programmed and how they are applied by the employer and its vendors. Diligent self-audits, as well as audits of current and prospective vendors, can go a long way toward reducing the risk of AI bias and discrimination.
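
As one illustration of what a simple self-audit might involve, the Python sketch below compares selection rates across applicant groups using the EEOC’s “four-fifths” rule of thumb for adverse impact. The data, group labels and threshold are invented for illustration only; a real audit should be designed with counsel and a validated statistical methodology.

# A minimal, hypothetical self-audit sketch: compare selection rates across
# groups and flag ratios below the EEOC's "four-fifths" rule of thumb.
# The outcomes, group labels and threshold below are assumptions for illustration.
from collections import Counter

# Hypothetical screening outcomes: (group, selected?)
outcomes = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_plus", True), ("40_plus", False), ("40_plus", False),
]

selected = Counter(group for group, ok in outcomes if ok)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")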

Please contact a member of Akin’s cybersecurity, privacy and data protection team or labor and employment team if you have any questions about how this case may impact your company or your company’s hiring and employment practices.


1 The case at issue appears to stem from software that the EEOC claims was programmed for automated decision making, rather than from generative or other AI. Nonetheless, the agency itself connects this case to AI in its press release, where EEOC Chair Charlotte A. Burrows refers to it as “an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative.”

2 The EEOC has discussed AI together with automated systems generally. See Equal Employment Opportunity Comm’n, Press Release, EEOC Releases New Resource on Artificial Intelligence and Title VII, at https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii (May 18, 2023) (the agency’s technical assistance document on the application of Title VII of the Civil Rights Act to an employer’s use of automated systems, including those that incorporate AI). The EEOC defines automated systems broadly to include software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. See EEOC, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, at https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems (April 25, 2023).

3 Equal Employment Opportunity Comm’n v. iTutorGroup, Inc., No. 1:22-cv-02565-PKC-PK (E.D.N.Y. Aug. 9, 2023).

4 Id. at 15.

5 Id. at 18.

6 Id. at 8.

7 Id. at 12.

8 Id. at 14.
