Colorado Enacts Groundbreaking AI Consumer Protection Legislation

May 30, 2024

Reading Time: 10+ min

Key Points

  • Colorado's new AI law creates obligations for developers and deployers of high-risk artificial intelligence (AI) systems.
  • Similar to the EU AI Act, the law is risk-based and defines a "high-risk" AI system as one that makes, or is a substantial factor in making, a consequential decision in specified categories.
  • Developers and deployers of high-risk AI systems are required to use reasonable care, including impact assessments, to avoid algorithmic discrimination. There is a rebuttable presumption of reasonable care for developers and deployers that comply with specified requirements.
  • Deployers will be tasked with ensuring that consumers are adequately notified when they are interacting with AI or when high-risk AI is used to make decisions about them.
  • Following its signing by Governor Jared Polis, the law becomes effective on February 1, 2026.

Overview

On May 17, 2024, Colorado Governor Jared Polis signed into law S.B. 24-205, a pioneering piece of legislation aimed at regulating high-risk AI systems. This new law, set to take effect on February 1, 2026, introduces stringent requirements for AI developers and deployers, focusing on risk management and the prevention of algorithmic discrimination. The legislation marks a significant step in state-level AI regulation, potentially setting a precedent much as the GDPR did for privacy laws.

Scope & Definitions

The law’s definition of “AI system” is broad, encompassing “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”1 The law applies to persons conducting business in Colorado that are either “deployers” who use high-risk AI systems, or “developers” who develop or intentionally and substantially modify AI systems.2 Intentional and substantial modifications are defined as deliberate changes made to AI systems that create any new reasonably foreseeable risk of algorithmic discrimination, but they do not include changes to high-risk AI systems that continue to learn after deployment or after being made available to a deployer.3

High-risk AI systems under the law refer to any AI system that “when deployed, makes, or is a substantial factor in making, a consequential decision.”4 According to the law, a consequential decision means any decision that has a “material legal or similarly significant effect” on providing or denying a consumer (or the cost or terms of) the following: (i) education opportunities or enrollment; (ii) employment or employment opportunities; (iii) financial or lending services; (iv) essential government services; (v) healthcare services; (vi) housing; (vii) insurance; or (viii) legal services.5 These categories overlap with, but are not identical to, those found in the EU AI Act.

The law also regulates “algorithmic discrimination” by restricting any condition in which the use of an AI system results in “unlawful differential treatment or impact” that disfavors an individual or group on the basis of the following (actual or perceived) characteristics: (i) age; (ii) color; (iii) disability; (iv) ethnicity; (v) genetic information; (vi) limited English language proficiency; (vii) race or national origin; (viii) religion; (ix) reproductive health; (x) sex; (xi) veteran status; or (xii) other protected classification under Colorado state law or federal law.6

Requirements for AI Developers

The law will require developers of high-risk AI systems to use “reasonable care” to protect consumers from any known or foreseeable risks of algorithmic discrimination resulting from the intended and contracted uses of those high-risk AI systems.7 There is a rebuttable presumption that a developer used reasonable care if they complied with the law’s requirements and any additional requirements that the Colorado Attorney General (AG) may promulgate.8

Mandatory Risk Documentation

Developers of high-risk AI systems will be required to make certain items available to either deployers or other developers of those high-risk AI systems, namely the following (a schematic sketch of these disclosures appears after the list):

  • A general statement that describes the reasonably foreseeable uses and known harmful (or inappropriate) uses of the high-risk AI system.
  • Documents that disclose: (i) high-level summaries of the type of training data, (ii) known or reasonably foreseeable limitations of the high-risk AI system, (iii) the purpose of the high-risk AI system, (iv) intended benefits and uses and (v) any other information necessary for the deployer to comply with the law’s requirements.
  • Documents that describe: (i) how performance evaluations and discrimination mitigation for the high-risk AI system were completed prior to release, (ii) data governance measures covering training data and how examinations were conducted for data suitability, possible bias and mitigation, (iii) intended outputs of the high-risk AI system, (iv) steps the developer took to mitigate known or reasonably foreseeable risks of algorithmic discrimination that might result from the reasonably foreseeable deployment of the high-risk AI system and (v) how the high-risk AI system should be used, how it should not be used and how it should be monitored when it makes (or is a substantial factor in making) a consequential decision.
  • Any additional documents reasonably necessary for the deployer to understand the high-risk AI system’s output and monitor for risk of algorithmic discrimination.9
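
To make the scope of this documentation concrete, here is a minimal sketch of how a developer might structure the disclosure package. The class and field names are illustrative assumptions: the statute prescribes the content of these disclosures, not any particular format.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the developer disclosure package described above.
# Field names are illustrative assumptions, not statutory terms.
@dataclass
class DeveloperDisclosurePackage:
    general_statement: str                  # reasonably foreseeable and known harmful uses
    training_data_summary: str              # high-level summary of training data types
    known_limitations: list[str]            # known or reasonably foreseeable limitations
    purpose: str                            # purpose of the high-risk AI system
    intended_benefits_and_uses: str
    evaluation_and_mitigation: str          # pre-release performance evaluation and discrimination mitigation
    data_governance_measures: str           # data suitability, bias examination and mitigation
    intended_outputs: str
    discrimination_mitigation_steps: list[str]
    usage_and_monitoring_guidance: str      # how the system should and should not be used and monitored
    additional_documents: list[str] = field(default_factory=list)
```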

The AG can request this information from developers of high-risk AI systems, who must then disclose it within 90 days of the request.10

Facilitation of Impact Assessments

The law will require developers who make a high-risk AI system available to a deployer or other developer to provide the information necessary to complete impact assessments of the high-risk system.11 These developers also have an obligation to post certain information to their website (or in a public use case inventory), specifically:

  • The types of high-risk AI systems that the developer has developed, or intentionally and substantially modified, and makes available to deployers or other developers.
  • How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that could result from the development or intentional and substantial modification of the types of high-risk AI systems described in the developer’s posting.12

Under the law, these developers must keep this information accurate, including updating it no later than 90 days after intentionally and substantially modifying any of these high-risk AI systems.13

Risk Disclosure

Developers of a high-risk AI system must disclose any known or reasonably foreseeable risks of algorithmic discrimination resulting from intended uses of the high-risk AI system to the AG, as well as to all known deployers or other developers of the high-risk AI system. These disclosures will be required as of February 1, 2026, and additional disclosures will need to be made no later than 90 days after either of the following events (a deadline sketch follows the list):

  1. The developer’s ongoing testing uncovers that the high-risk AI system has been deployed and caused or is reasonably likely to have caused algorithmic discrimination; or
  2. The developer receives a credible report from a deployer that the high-risk AI system has been deployed and has caused algorithmic discrimination.14
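
Because several of the law’s obligations run on 90-day clocks, compliance tooling might compute the disclosure deadline along these lines. This is a minimal sketch; treating the statutory period as simple calendar days is an assumption, not legal advice.

```python
from datetime import date, timedelta

DISCLOSURE_WINDOW_DAYS = 90  # statutory window after a triggering event

def disclosure_deadline(trigger: date) -> date:
    """Latest date to notify the AG and known deployers or other developers
    after testing reveals, or a credible report alleges, algorithmic
    discrimination (simplified calendar-day reading of the statute)."""
    return trigger + timedelta(days=DISCLOSURE_WINDOW_DAYS)

# Example: a credible deployer report received on March 3, 2026
print(disclosure_deadline(date(2026, 3, 3)))  # 2026-06-01
```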

Requirements for AI Deployers

The law will require deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. As with developers, there is a rebuttable presumption that deployers of high-risk AI systems used reasonable care if they comply with the law’s requirements and any additional rules from the AG.15

Risk Management

Deployers of high-risk AI systems will have a duty to implement a risk management policy and program to govern deployment of high-risk AI systems. This policy and program, which may cover multiple deployed high-risk AI systems, must list the principles, processes and personnel the deployer uses to document and mitigate the risks of algorithmic discrimination. The risk management policy and program will be an “iterative process” subject to regular review and updates over the lifecycle of the high-risk AI system, consistent with existing best practices and evolving standards.16 The law specifies that risk management policies and programs “must be reasonable” considering the following (a schematic sketch follows the list):

  • The latest AI Risk Management Framework (RMF) guidance17 from the National Institute of Standards and Technology (NIST), or any risk management framework for AI systems that the AG may designate.
  • The size and complexity of the deployer.
  • The nature and scope of the high-risk AI systems deployed, including their intended uses.
  • The sensitivity and volume of data processed in connection with the high-risk AI systems being deployed.18
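
As a rough illustration of how the reasonableness factors above might be tracked internally, the sketch below models a risk management program record with a periodic review check. The field names and the annual review interval are assumptions, not statutory terms.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskManagementProgram:
    framework: str               # e.g., NIST AI RMF or an AG-designated framework
    deployer_headcount: int      # bears on the "size and complexity" factor
    systems_in_scope: list[str]  # nature, scope and intended uses of deployed systems
    data_sensitivity: str        # sensitivity and volume of processed data
    last_reviewed: date

    def review_due(self, today: date, interval_days: int = 365) -> bool:
        """The law calls for an 'iterative process' of regular review; the
        interval here is a policy choice, not a statutory requirement."""
        return today - self.last_reviewed > timedelta(days=interval_days)
```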

Impact Assessments

Deployers of high-risk AI systems, or their third-party contractors, will be required to complete impact assessments for their high-risk AI systems. These impact assessments must be completed at least annually, and additionally within 90 days after making available any intentional and substantial modification to the high-risk AI system.19 Impact assessments will include the following (a schema and deadline sketch follow the list):

  • A statement documenting the purpose, intended use cases, deployment context and benefits of the high-risk AI system.
  • An analysis of whether the deployment of the high-risk AI system poses any known or reasonably foreseeable risks of algorithmic discrimination, along with mitigating steps taken.
  • A description of the categories of data the high-risk AI system processes as inputs, as well as the outputs it produces.
  • Whether the deployer used data to customize the high-risk AI system, and an overview of the categories of data used for that customization.
  • Any performance evaluation metrics and known limitations of the high-risk AI system.
  • A description of any transparency measures taken, including measures taken to disclose to a consumer when the high-risk AI system is in use.
  • A description of post-deployment monitoring and user safeguards, including the oversight, use and learning process the deployer established to address issues resulting from the deployment of the high-risk AI system.20
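
A deployer tracking these contents and the assessment cadence might use a record like the following. The field names are illustrative, and the deadline function reflects a simplified calendar-day reading of the annual and 90-day-after-modification requirements.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ImpactAssessment:
    completed_on: date
    purpose_and_use_cases: str            # purpose, intended use, context and benefits
    discrimination_risk_analysis: str     # known/foreseeable risks and mitigations
    input_output_data_categories: str
    customization_data_overview: Optional[str]  # only if deployer data customized the system
    performance_metrics_and_limits: str
    transparency_measures: str
    post_deployment_monitoring: str

def next_assessment_due(last_completed: date,
                        modified_on: Optional[date] = None) -> date:
    """At least annual, accelerated to 90 days after an intentional and
    substantial modification (simplified reading of the statute)."""
    annual = last_completed + timedelta(days=365)
    if modified_on is not None:
        return min(annual, modified_on + timedelta(days=90))
    return annual
```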

When deployers of high-risk AI systems make an intentional and substantial modification to a high-risk AI system, they will be required to disclose the extent to which that system was used in a manner consistent with, or varied from, the intended use disclosed in the impact assessment.21 A deployer must conduct, or hire a third party to conduct, an annual review of each high-risk AI system deployed to ensure the systems are not causing algorithmic discrimination.22 Deployers are also required to maintain their most recently completed impact assessment, all records for each impact assessment and all prior impact assessments for at least three years after the final deployment of the high-risk AI system.23

Consumer Transparency

Deployers of high-risk AI systems will be subject to consumer notification requirements under the law. When a high-risk AI system makes, or is a substantial factor in making, a consequential decision about a consumer, the deployer must do the following (a notice-assembly sketch follows the list):

  • Notify the consumer, before the decision is made, that a high-risk AI system has been deployed to make, or be a substantial factor in making, a consequential decision.
  • Provide the consumer with a statement explaining the purpose of the high-risk AI system, the nature of the consequential decision, the contact information for the deployer, a description of the high-risk AI system and instructions for how the consumer can access more information about the high-risk AI system on the deployer’s website.
  • Provide the consumer with information on their right to opt out of certain processing of personal data concerning the consumer for purposes of profiling, for decisions producing legal or similarly significant effects.24
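
As a sketch of how a deployer might assemble the pre-decision notice elements listed above, consider the following. The wording and structure are assumptions: the law requires plain-language content but prescribes no particular template.

```python
def build_consumer_notice(system_purpose: str,
                          decision_nature: str,
                          deployer_contact: str,
                          system_description: str,
                          info_url: str) -> str:
    """Assemble the statutory notice elements into plain language.
    Illustrative only; the law prescribes content, not format."""
    return (
        "A high-risk AI system will make, or be a substantial factor in "
        "making, an upcoming decision about you.\n"
        f"Purpose of the system: {system_purpose}\n"
        f"Nature of the decision: {decision_nature}\n"
        f"Deployer contact: {deployer_contact}\n"
        f"About the system: {system_description}\n"
        "More information, including your right to opt out of certain "
        f"profiling: {info_url}"
    )
```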

In the event the deployer’s high-risk AI system made, or was a substantial factor in making, a consequential decision that is adverse to the consumer, that deployer must provide the following to that consumer:

  1. A statement on the principal reason or reasons for the consequential decision, including the degree and manner in which the high-risk AI system contributed to the decision, the type of data processed by the high-risk AI system and the source or sources for that data.
  2. An opportunity to correct any incorrect personal data the high-risk AI system processed in making, or as a substantial factor in making the consequential decision.
  3. An opportunity to appeal an adverse, consequential decision concerning the consumer resulting from the deployment of a high-risk AI system.25

The required consumer notice, statement and contact information must be provided directly to the consumer in plain language. Deployers will also be required to publish statements on their websites summarizing the types of high-risk AI systems they deploy, how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may result from deploying the high-risk AI systems and the details of the nature, source and extent of information collected and used by the deployer.26

The law also includes a general requirement to notify consumers about AI interactions. Deployers or developers that deploy, sell or otherwise make available an AI system (not just one that is high-risk) that is intended to interact with consumers must disclose that the consumer is interacting with an AI system, unless it would be obvious to a reasonable person.27 Utah similarly requires consumer-facing AI to clearly and conspicuously disclose, upon request, that the user is interacting with “generative artificial intelligence and not a human.”28

Discrimination Reporting

If a deployer of a high-risk AI system finds that the system has caused algorithmic discrimination, that deployer has 90 days following the date of discovery to inform the AG.29 The AG may also request that deployers or their third-party contractors disclose, within 90 days, their risk management policy, completed impact assessments or their required records for impact assessments.30

Exemption for Certain AI Deployers

The law includes a limited carve-out for some deployers of high-risk AI systems, exempting them from the risk management and impact assessment requirements. Those requirements do not apply to deployers that, while deploying a high-risk AI system (see the sketch after this list):

  • Employ fewer than 50 full-time-equivalent employees.
  • Do not use the deployer’s own data to train the high-risk AI system.
  • Use the high-risk AI system for the intended uses disclosed to the deployer.
  • Deploy a high-risk AI system that continues learning based on data derived from sources other than the deployer’s own data.
  • Make available to consumers impact assessments that the developer of the high-risk AI system completed, containing information substantially similar to that required in the law’s deployer impact assessment.31
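
Read together, the carve-out can be thought of as a conjunction of conditions, sketched below. The names are illustrative assumptions, and an actual exemption determination would of course require legal analysis rather than a boolean check.

```python
from dataclasses import dataclass

@dataclass
class DeployerProfile:
    fte_count: int
    trains_with_own_data: bool
    used_only_as_intended: bool
    learns_from_non_deployer_data: bool
    publishes_developer_impact_assessment: bool  # substantially similar content

def exempt_from_risk_and_impact_duties(d: DeployerProfile) -> bool:
    """All conditions must hold for the limited exemption to apply
    (simplified reading of the statute's carve-out)."""
    return (d.fte_count < 50
            and not d.trains_with_own_data
            and d.used_only_as_intended
            and d.learns_from_non_deployer_data
            and d.publishes_developer_impact_assessment)
```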

Enforcement

The law does not include a private right of action and will be exclusively enforced by the AG. Violations of the law’s requirements constitute an unfair trade practice under state law, and the developer, deployer or other person bears the burden of demonstrating compliance with requirements.32 To demonstrate compliance, the law provides an affirmative defense to developers, deployers or other persons that:

  • Find and cure violations using feedback that they encouraged deployers or users to provide, adversarial testing or red teaming,33 or an internal review process.
  • Are in compliance with the latest RMF published by NIST, another substantially equivalent nationally recognized risk management framework for AI systems or any risk management framework for AI systems that the AG may designate.34

The law also provides that the AG may adopt additional rules as necessary for implementation and enforcement.35

Key Takeaways

Colorado is one of the first states to enact an AI law with comprehensive consumer protections. The law is a logical extension of existing consumer protections, with a focus on preventing consumer harm from high-risk AI and ensuring that consumers are aware when AI is being used. The requirements embody best practices and overlap with existing privacy frameworks and regulations. As seen with other privacy and consumer protection laws, pioneering statutes, like those from Colorado and Utah, provide a model for other states to follow.

The Akin cross-practice AI team continues to advise clients on navigating the evolving AI regulatory landscape and will closely track state and federal efforts to regulate AI, as well as the resulting opportunities for industry engagement, and keep clients apprised of key developments.


1 S.B. 24-205, 74th Gen. Assemb., Reg. Sess. (Colo. 2024) § 1701(2).

2 Id. at § 1701(5–7).

3 Id. at § 1701(10). This does not include changes made as a result of AI learning post-deployment, provided the change was predetermined in an initial impact assessment by the deployer or the deployer’s third party contractor and properly documented.

4 Id. at § 1701(9)(a). A “substantial factor” is generated by an AI system, assists in making a consequential decision, can alter the outcome of a consequential decision, and includes any use of AI to generate content, decisions, predictions or recommendations about a consumer that is used to make consequential decisions about that consumer. Id. at § 1701(11).

5 Id. at § 1701(3).

6 Id. at § 1701(1)(a).

7 Id. at § 1702(1).

8 Id.

9 Id. at § 1702(2).

10 Id. at § 1702(7).

11 Id. at § 1702(3)(a). This information can be provided through items like model cards, dataset cards or other impact assessments.

12 Id. at § 1702(4)(a).

13 Id. at § 1702(4)(b).

14 Id. at § 1702(5).

15 Id. at § 1703(1).

16 Id. at § 1703(2)(a).

17 Current guidance as of publication: https://www.nist.gov/itl/ai-risk-management-framework

18 Id. at § 1703(2)(a).

19 Id. at § 1703(3)(a).

20 Id. at § 1703(3)(b). Reasonably similar impact assessments completed under other regulations may also be used.

21 Id. at § 1703(3)(c).

22 Id. at § 1703(3)(g).

23 Id. at § 1703(3)(f).

24 Id. at § 1703(4)(a).

25 Id. at § 1703(4)(b). This appeal must allow for human review if feasible, unless providing the opportunity for appeal is not in the best interest of the consumer, such as where delay might pose a safety risk to the consumer.

26 Id. at § 1703(5)(a).

27 Id. at § 1704(1–2).

28 Utah S.B. 149 § 13-2-12(3).

29 Colorado S.B. 24-205 at § 1703(7).

30 Id. at § 1703(9).

31 Id. at § 1703(6).

32 Id. at § 1706(1–2), (4), (6).

33 As defined by NIST.

34 Id. at § 1703(2).

35 Id. at § 1707(1).
