Seyfarth Synopsis: On May 17, 2024, Colorado Governor Jared Polis signed Colorado SB 24-205, making Colorado the first state to enact broad legislation regulating employers’ use of AI to make “consequential decisions.” Colorado SB 24-205 covers a wide range of AI applications, including employment, and incentivizes AI risk management practices that have long been the subject of discussion by regulators at the federal, state, and international level. For many applications, the bill requires impact assessments, consumer-facing disclosures, and notifications regarding Colorado residents’ ability to opt out.
This Seyfarth Client Update has been edited following its publication to reflect Governor Polis’ signing of Colorado SB 205.
Colorado SB 24-205 Was Signed Into Law on May 17 Over Business Group Objections
Governor Polis signed Colorado SB 24-205, “Concerning Consumer Protections in Interactions With Artificial Intelligence Systems,” on May 17, 2024, “with reservations.” His signing statement acknowledges that SB 205 “creates a complex compliance regime for all developers and deployers of AI doing business in Colorado… This bill is among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale.”
In his signing statement, Governor Polis further expresses concern “about the impact this law may have on an industry that is fueling critical technological advancements” and observes that regulation like SB 205 “applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.”
Accordingly, in his signing statement, Governor Polis encourages the bill’s sponsors “to significantly improve” their approach before SB 205 takes effect, and specifically calls on the federal government to enact federal legislation that would preempt the bill he just signed, replacing it “with a needed cohesive federal approach.” He further calls on Colorado legislators and stakeholders to “fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies” and to craft future legislation “that will amend [SB 24-205] to conform with evidence based findings and recommendations.”
Business groups had previously urged Governor Polis to veto the bill.
Despite Governor Polis’ call for preemptive federal AI legislation, it is unlikely that consensus will rapidly emerge on the federal level. On May 15, 2024, Senate Majority Leader Chuck Schumer, along with members of a bipartisan Senate AI working group, issued a 31-page “Roadmap for Artificial Intelligence (AI) Policy in the United States Senate.” While the Senate AI working group pushes for $32 billion in AI investments by the federal government, it continues a path of legislative exploration “of the implications and possible solutions (including private sector best practices)” to the challenges of AI in the workforce. Specific substantive legislative proposals regarding employment issues did not emerge from the bipartisan Senate AI working group, and consensus over legislative proposals that explicitly preempt state laws like Colorado SB 205 will present numerous challenges. Thus, comprehensive federal AI legislation addressing employment issues is unlikely to materialize in the near term.
This leaves us with state efforts, like Colorado SB 205. This legislative season, legislators in states other than Colorado have introduced bills with specific provisions regulating the use of AI in hiring and other employment applications, including in Connecticut (SB 2), California (AB 2390), New York (S5641A), Illinois (HB 5116), Rhode Island (H 7521), and Washington (HB 1951), among others. While Connecticut’s SB 2 passed the Connecticut Senate on April 24, 2024, after Connecticut Governor Ned Lamont threatened to veto the bill, SB 2 was not taken up in the Connecticut House before the end of the legislative session, ending its chances of passage this year. Of the remaining state bills specifically seeking to regulate the use of AI in employment, Colorado SB 205 is the first to make it across the finish line, although the opportunity still remains for some state legislatures to advance the issue this year.
Colorado SB 205’s Broad Scope
Importantly, in its final form, Colorado SB 205 establishes requirements for both developers and deployers of “high-risk” AI systems. The bill defines these as AI systems that make or significantly influence “consequential decisions” in areas such as employment, housing, credit, education, and healthcare. Notably, SB 24-205’s definition of “consequential decisions” in the employment context broadly includes decisions that have a “material legal or similarly significant effect on the provision or denial to any consumer of … employment or an employment opportunity.” This language potentially sweeps in a wide range of employment practices beyond hiring, promotion, or termination, such as decisions related to performance management, disciplinary action, or even workplace surveillance.
As a result, Colorado SB 205 could potentially chill the use of AI in a range of employment contexts beyond sourcing, selection, and termination.
Colorado Consumer Disclosures Under SB 205
The disclosure requirements in Colorado SB 205 apply broadly to Colorado consumers affected by AI-driven consequential decisions, encompassing both Colorado consumers in the traditional sense and Colorado residents applying for jobs.
Under Colorado SB 205, deployers of high-risk AI systems must provide “consumers,” including job applicants, with information regarding their right to opt out of the processing of their personal data, consistent with Colorado’s existing privacy law. Additionally, the bill requires AI deployers using covered systems to notify Colorado consumers, including job applicants, that such a system is being used, disclose the purpose and nature of the decision, provide contact information for the deployer, and offer a plain-language description of the AI system.
Colorado SB 205 also includes provisions related to AI explainability and transparency. It requires AI deployers to provide Colorado “consumers,” including job applicants, with a statement disclosing the principal reasons for any adverse consequential decision made by a high-risk AI system, including the degree and manner in which the AI contributed to the decision, the type and source of data processed in making the decision, and an opportunity to correct any personal data used. Covered AI deployers are also required to post a public statement on their website, summarizing the types of high-risk AI systems they currently deploy, how they manage risks of algorithmic discrimination, and the nature, source, and extent of information collected and used. However, the bill provides exceptions to these disclosure requirements for certain deployers, such as those with fewer than 50 employees who meet specific criteria, and allows for the withholding of trade secrets or information protected from disclosure by law.
The bill also requires that when an AI system is intended to interact with Colorado consumers, the deployer or developer must disclose to each consumer that they are interacting with an AI system unless it would be obvious to a reasonable person that they are interacting with AI. The required notices, statements, contact information, and descriptions must be provided directly to consumers in plain language, in the same languages used by the deployer in the ordinary course of business, and in a format accessible to consumers with disabilities.
The bill also provides Colorado consumers with the right to correct erroneous personal data used in making consequential decisions, as well as the right to appeal adverse decisions. Colorado SB 205 also provides that, “when technically feasible,” the appeal process must allow for human review.
Business groups have already expressed concern that these requirements will increase their compliance costs and chill the use of AI in Colorado.
Impact Assessments and Developer Disclosures
Colorado SB 205 also requires AI deployers to conduct impact assessments for high-risk AI systems. The concept of impact assessments is not new: among other things, impact assessments will be required under the recently passed EU AI Act and will be conducted as part of the federal government’s own internal AI risk management processes, recently finalized by the Office of Management and Budget. Colorado SB 205, however, specifies particular items that an impact assessment must contain. Among other things, the assessment must include:
- A description of the categories of data the system processes as inputs and the outputs it produces;
- An overview of the categories of data used to customize the system, if applicable;
- The metrics used to evaluate the system’s performance and known limitations; and
- A description of transparency measures taken, including measures to disclose the use of the system to consumers.
Consideration of the data used to “train” AI is consistent with how many regulators are now talking about the risks of AI. (For further discussion of comments at the federal level, see our March 25, 2024 update on comments about AI by EEOC Chair Charlotte Burrows and Solicitor of Labor Seema Nanda.)
Colorado SB 205 also requires AI developers to provide AI deployers with key information about their high-risk AI systems. This includes a general statement describing the AI system’s reasonably foreseeable uses and known harmful or inappropriate uses, as well as documentation disclosing “high-level summaries of the type of data” used to train the system, the AI system’s known limitations and risks of algorithmic discrimination, and its purpose and intended benefits. AI developers must also provide deployers with documentation on how the system was evaluated for performance and bias mitigation before deployment, the data governance measures used, and the steps taken to mitigate known or foreseeable risks of algorithmic discrimination.
Incentivizing AI Risk Management Practices
Colorado SB 205 requires AI developers and AI deployers to exercise “reasonable care” to protect Colorado consumers from “reasonably foreseeable risks of algorithmic discrimination.” To achieve this goal, it incentivizes various AI risk management practices that have long been the subject of discussion at the federal level. While Colorado SB 205 grants the Colorado Attorney General sole enforcement authority for violations of the law, it creates a rebuttable presumption that AI deployers and AI developers have exercised reasonable care if they have implemented certain risk management practices that are closely aligned with the NIST AI Risk Management Framework or other risk management frameworks that the Colorado Attorney General’s office might identify in its rulemaking efforts. These practices closely align with the Department of Labor’s “promising practices” regarding AI, other guidance recently issued by the EEOC and OFCCP regarding employers’ use of AI, and the White House’s recently issued directives to federal agencies for the federal government’s own use of AI.
The bill also provides an affirmative defense for employers who discover and cure a violation as a result of proactive measures such as user feedback, adversarial testing or red-teaming, or internal reviews, and who demonstrate compliance with these recognized AI risk management frameworks. This combination of a rebuttable presumption and an affirmative defense creates strong incentives for employers to prioritize AI risk management and to proactively identify and address potential issues.
Implications for Employers
Colorado SB 205’s rapid passage and Governor Polis’ signing of the bill “with reservations” demonstrate ongoing support for state legislative action. While Colorado is now the first state to enact such broad-based AI legislation, it may not be the last, as the core concepts behind Colorado SB 205 are likely to resurface in other states. (Among other things, the sponsor of Connecticut SB 2 has vowed to reintroduce the legislation in the next session.)
Employers using or considering using AI in their employment processes should evaluate their current AI risk management practices against the requirements of Colorado SB 205 and consider whether enhancements are necessary to align their current practices with these emerging expectations.
We will continue to closely monitor the status of Colorado SB 205 and Colorado’s efforts to implement or clarify its provisions, and other developments related to AI regulation. For additional information, please contact the authors of this alert, a member of Seyfarth’s People Analytics team, or any of Seyfarth’s attorneys.
With approximately 900 lawyers across 17 offices, Seyfarth Shaw LLP provides advisory, litigation, and transactional legal services to clients worldwide.