New NYC Law on Preventing Bias in Automated Employment Assessments

Jan 21, 2022

Reading Time: 4 min

Employers in New York City using artificial intelligence (AI), data analytics or statistical modeling in the hiring or promotion process will need to notify candidates in advance and conduct an annual “bias audit.”

Passed on November 10, 2021, this new law is one of the most significant measures yet seen to address concerns from civil rights groups that machine learning may result in discrimination against women and minorities. The law takes effect on January 1, 2023, with fines of $500 for first-time violations and up to $1,500 for subsequent violations.

Broad Scope

Although one might expect the new law to specifically target algorithmic decision making, the language seems to cover a far wider range of employment tests. The law applies to “automated employment decision tools” defined as “any computational process, derived from machine learning, statistical modeling, data analytics or artificial intelligence” that generates a “simplified output, including a score, classification, or recommendation,” and substantially assists or replaces discretionary employment decisions.1

Even commonplace online employment assessments that predate AI technology could be swept in by the broad definition of “automated employment decision tools.” For example, under Title VII of the Civil Rights Act, an employment test must be job related and consistent with business necessity if it has a disparate impact on members of a protected group. Job relatedness typically is established through a validation study, and most validation studies rely upon some form of “statistical modeling” to demonstrate a correlation between the assessment and the knowledge, skills, abilities and behavioral characteristics required to successfully perform the job. The same is true of the justification for the method of scoring, weighting and otherwise using an assessment in the selection process. As such, the vast majority of properly validated employment tests use a “computational process” that was “derived from” either “statistical modeling” or “data analytics” and yields a “simplified output,” such as a final score or a pass/fail flag. Likewise, all objectively scored tests can be described as replacing “discretionary decision making.” Finally, while the law includes some exceptions, they are limited to tools that do not materially impact employment decisions, such as “a junk mail filter, firewall, antiviral software, calculator, spreadsheet, databases, data set or other compilation of data.”2
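
To see how easily a conventional test fits this definition, consider a minimal sketch of a weighted, pass/fail assessment score. Everything here is hypothetical (the subscale names, weights and cutoff are our own illustration, not drawn from the law or any real assessment):

# Hypothetical illustration only: a conventional weighted assessment of the
# kind validated under Title VII. The subscale weights would typically come
# from a validation study (e.g., a regression of job performance on subscores),
# which is itself "statistical modeling" under the law's definition.

CUTOFF = 70.0  # hypothetical passing score established by the validation study
WEIGHTS = {"cognitive": 0.5, "situational_judgment": 0.3, "work_sample": 0.2}

def score_candidate(subscores: dict[str, float]) -> tuple[float, str]:
    """Combine subscale scores into a single number and a pass/fail flag --
    a "simplified output, including a score, classification, or recommendation"."""
    total = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)
    return round(total, 2), ("pass" if total >= CUTOFF else "fail")

print(score_candidate({"cognitive": 80, "situational_judgment": 65, "work_sample": 72}))
# (73.9, 'pass')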

Notification Requirement

New York City employers and employment agencies that use “automated employment decision tools” will have to meet strict notice requirements. Specifically, all candidates who reside in the City and who will be screened by such tools must receive notice, at least 10 business days in advance, (i) that an automated employment decision tool will be involved in assessing their candidacy; (ii) of the job qualifications and characteristics the tool will assess; and (iii) that the candidate may request an alternative selection process or accommodation, which the law leaves unspecified.3

The notice requirements will create challenges for employers using many of the AI sourcing and screening tools on the market today. In most cases, the vendors who sell these tools claim to be assessing candidates on job-related factors, yet refuse to provide any specifics because their algorithms are proprietary. In fact, the vendors themselves may not know which characteristics and qualifications are being screened: certain algorithms continually change, or become “smarter,” by incorporating successful recruiting or hiring outcomes so that the tool comes to prefer candidates who share some commonality with those previously selected.
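
The drift problem can be shown with a toy feedback loop. This is purely illustrative, not any vendor’s actual system; the model choice (a scikit-learn logistic regression) and all of the numbers are our own assumptions:

# Toy illustration, not any vendor's actual pipeline: a screening model refit
# each cycle on the employer's own selections, so the traits it keys on can
# drift toward whatever past hires had in common.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pool = rng.normal(size=(200, 5))            # hypothetical candidate features
labels = (pool[:, 0] > 0.5).astype(int)     # seed labels: who was hired before

for cycle in range(3):
    model = LogisticRegression().fit(pool, labels)
    pool = rng.normal(size=(200, 5))        # next hiring cycle's applicants
    labels = model.predict(pool)            # the model's own picks become the
                                            # next cycle's "successful hire" labels
    print(f"cycle {cycle}: learned weights {np.round(model.coef_[0], 2)}")
    if labels.sum() in (0, len(labels)):    # guard: need both classes to refit
        break

No one ever enumerates the qualifications being screened; they are an emergent property of whoever happened to be selected before.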

Annual Bias Audits

The new law also requires a “bias audit” at least annually, defined as an “impartial evaluation” conducted by an “independent auditor,” that includes, at a minimum, an analysis of whether the automated employment decision tool has resulted in a disparate impact based on gender, race or national origin.4 The law does not specify who qualifies as an “independent auditor,” but presumably the term would not include an in-house expert or the vendor who created the assessment. Potentially most problematic for employers, the results of the “bias audit,” along with “the distribution date of the tool to which such audit applies,” must be published on the employer’s website before the employer may use the tool. In practice, this means employers will need to launch the assessment first, with real candidates or incumbents and for development purposes only, in order to gather the data needed to test for disparate impact and, hopefully, satisfy the bias audit requirement.
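
The law does not define how a bias audit must be performed. One conventional starting point, sketched below with hypothetical data, is the EEOC’s “four-fifths rule,” under which a group selected at less than 80 percent of the rate of the highest-selected group is generally regarded as experiencing adverse impact:

# Hypothetical sketch of one conventional disparate-impact check, the EEOC's
# "four-fifths rule" (29 C.F.R. § 1607.4(D)); the new law does not prescribe
# this or any other audit methodology.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (candidates advanced by the tool, candidates screened).
    Returns each group's selection rate divided by the highest group's rate;
    a ratio below 0.8 is conventionally treated as evidence of adverse impact."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical audit data for illustration only
print(impact_ratios({"men": (48, 100), "women": (30, 100)}))
# {'men': 1.0, 'women': 0.625} -- 0.625 < 0.8 would flag adverse impact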

Takeaway

The New York City law is the most sweeping effort yet by lawmakers to curtail bias when AI is used to make employment decisions. Earlier in 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative to study AI tools used in hiring decisions, highlighting the concern over bias and discrimination. Illinois passed its own AI employment law, which gives job applicants the right to know if AI is being used in a video interview and the option to have the video data deleted, while Maryland passed a law requiring job applicant consent for the use of facial recognition technology. Washington, D.C. likewise announced proposed legislation that would regulate algorithmic decision making, complete with annual audits similar to those required by the New York City law.

The broad scope of the law leaves many open questions, such as whether long-standing computer-based assessments derived from traditional testing validation strategies are covered, or whether passive evaluation tools, such as the recommendation engines used by employment firms, fall within its scope.

In the absence of regulatory guidance, employers who wish to screen New York City residents for employment or promotion using computer-based assessments will need to take the necessary steps before January 2023 to ensure compliance. The fines, $500 for a first violation and up to $1,500 for each subsequent violation, accrue daily: each day a violating automated employment decision tool is used counts as a separate violation.5 And, while the law does not include a private right of action, it also does not prevent a candidate from bringing a private action under other federal, state or local laws, such as the traditional antidiscrimination laws.6
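
By way of illustration, on that daily-accrual reading, an employer that used a non-compliant tool every day for a 30-day month could face $500 for the first day plus $1,500 for each of the remaining 29 days, roughly $44,000 for a single tool.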

Please contact a member of Akin Gump’s labor team or cybersecurity, privacy and data protection team if you have any questions about this new law or how these requirements will affect your company.


1 N.Y.C. Local Law No. 144 of 2021 (Int. No. 1894-2020), at 1.

2 Id. at 1.

3 Id. at 2.

4 “Protected individuals” are those persons required to be reported by employers under 42 U.S.C. § 2000e-8(c), as specified in 29 C.F.R. § 1602.7.

5 Id. at 3.

6 Id. at 3-4.
