On February 1, 2026, Colorado became the first US state to switch on a comprehensive algorithmic accountability regime. Senate Bill 24-205, the Colorado Artificial Intelligence Act, now requires developers and deployers of "high-risk" AI systems to exercise reasonable care to prevent algorithmic discrimination in consequential decisions — those affecting employment, lending, housing, education, healthcare, insurance, and government services. Impact assessments, disclosures to consumers, and notifications to the state attorney general are all now legally mandated.
The law's ambition is real. Algorithmic decision-making does shape who gets a job interview, a mortgage approval, or a rental application. In a federal vacuum where Congress has repeatedly failed to pass meaningful AI legislation, state action is understandable. But Colorado's experiment is already exposing the gap between regulatory ambition and operational reality — and even Governor Jared Polis, who signed the bill in May 2024, attached an unusually candid signing statement warning that the law was "imperfect" and urging the legislature to refine it before it took effect.
What the law actually requires
The Colorado AI Act borrows heavily from the EU AI Act's risk-tiered approach. A "high-risk artificial intelligence system" is any AI that, when deployed, makes or is a substantial factor in making a consequential decision. For developers, the duties include disclosing the system's purpose, training data characteristics, known limitations, and reasonably foreseeable risks of discrimination. For deployers, the duties include implementing a risk management policy, conducting annual impact assessments, notifying consumers when AI is used in adverse decisions, and offering an opportunity to appeal or correct data.
Enforcement sits exclusively with the Colorado Attorney General: there is no private right of action, a meaningful concession to industry. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, and the law offers an affirmative defense to entities that discover and cure a violation through internal review or testing and that otherwise comply with a recognized risk management framework such as the NIST AI Risk Management Framework.
Where the law gets it right
Several design choices deserve credit. The reliance on NIST's voluntary framework as a compliance safe harbor is sensible — it aligns state law with an emerging federal technical standard rather than inventing a parallel one. The AG-only enforcement model avoids the class-action lottery that has plagued state privacy laws like Illinois's BIPA. And the law's explicit carve-outs for low-risk uses — including spam filters, anti-fraud detection, cybersecurity tooling, and basic productivity software — show legislators understood that not every line of inference is a civil rights event.
The transparency obligations are also broadly defensible. Consumers facing an adverse consequential decision arguably should know an algorithm was involved and have a path to contest the outcome. That principle is not controversial.
Where the law overreaches
The problems lie in scope and ambiguity. "Substantial factor" is defined so loosely that nearly any algorithmic input to a decision could qualify. "Algorithmic discrimination" is defined to include any differential treatment or impact that disfavors a protected class, a standard that could capture disparate-impact outcomes even where the model itself is facially neutral and predictive of legitimate business criteria. And the definition of "developer" sweeps in anyone who "intentionally and substantially modifies" an AI system, which could mean a small business that fine-tunes an open-source model on its own data now inherits developer-level disclosure duties.
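To see why the disparate-impact reading matters in practice, consider a minimal, hypothetical sketch of how a deployer might screen a facially neutral scoring model for group-level disparities. The statute prescribes no metric; the four-fifths rule below is borrowed from employment-selection practice, and the group labels, approval counts, and threshold are illustrative assumptions, not anything drawn from the Act.

```python
# Illustrative sketch only: the Colorado AI Act does not prescribe a metric for
# "algorithmic discrimination." This uses the familiar four-fifths (80%) rule
# to show how a facially neutral model can still produce group-level gaps
# that a deployer's impact assessment would have to confront.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical loan-approval outcomes from a facially neutral scoring model.
outcomes = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
         + [("group_b", True)] * 42 + [("group_b", False)] * 58

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
# group_a approves at 60%, group_b at 42%; the 0.70 ratio trips the 80% rule.
```

A ratio below 0.8 would not itself establish a violation under the Act; it simply marks the kind of statistical gap that the law's broad "impact" language could put in scope, even when the model uses no protected attribute.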
The compliance cost curve is steepest for the smallest players. A large bank or insurer already has model risk management infrastructure under the Federal Reserve's SR 11-7 guidance or NAIC model bulletins. A 20-person fintech startup using a third-party scoring API does not. Industry trade groups have estimated annual compliance costs in the high six figures for mid-sized deployers, a number that, even if inflated, points to a real moat being built around incumbents.
The patchwork problem is already here
Colorado is not alone. California, Connecticut, New York, Texas, and Virginia have all introduced similar bills, with varying definitions of "high-risk" and varying enforcement structures. A national lender deploying a single underwriting model could soon face overlapping and inconsistent obligations across jurisdictions. This is exactly the patchwork that the US Chamber of Commerce, the Software & Information Industry Association, and the Center for Democracy & Technology have all warned about — strange bedfellows agreeing that fifty AI codes will not serve consumers or innovators.
The right response to algorithmic harm is not no regulation. It is targeted, technology-specific, federally coordinated regulation that focuses on actual decision contexts — credit, employment, housing — where existing civil rights law already applies.
A path forward
The state's Artificial Intelligence Impact Task Force has already proposed amendments to narrow the definition of "substantial factor," clarify the developer-deployer line, and extend the effective date for small businesses. Those are sensible fixes. Better still would be congressional action that preempts state-level AI codes with a coherent federal framework, something the NIST AI RMF, the EEOC's existing guidance on algorithmic hiring tools, and the CFPB's adverse action rules already partially provide.
Colorado deserves credit for taking algorithmic accountability seriously when Washington would not. But the law as written risks becoming a cautionary tale: a regime that imposes heavy paperwork on responsible actors while doing little to catch the bad ones. The next twelve months of enforcement will reveal whether the AG's office can apply the statute with proportionality — or whether the AI Act becomes Exhibit A in the case for federal preemption.