After more than three years of drafting, hearings, and revisions, the California Privacy Protection Agency (CPPA) has finalized its Automated Decision-Making Technology (ADMT) regulations under the California Consumer Privacy Act. The rules — covering a defined set of "significant decisions" in employment, housing, lending, insurance, healthcare, education, and access to essential goods and services — phase in through 2026 and 2027. They mark the most consequential algorithmic-accountability regime yet enacted in the United States, and, perhaps more importantly, the first in which a regulator visibly listened to its critics.
The final text is meaningfully narrower than the drafts circulated in 2023 and 2024. The CPPA dropped a controversial requirement to publish standalone risk assessments, scaled back the definition of "extensive profiling" so it no longer sweeps in routine personalization, and clarified that the rules apply to systems making or substantially replacing human judgment in consequential domains — not to every model with a recommendation engine. The retreat followed sustained pushback from the California Chamber of Commerce and technology trade groups, as well as an unusually direct public letter from Governor Gavin Newsom's office urging the Board to avoid duplicating a federal AI agenda that, after the 2024 election, was already in flux.
What the rules actually require
For businesses using ADMT to make qualifying "significant decisions" about California consumers, three core obligations now apply (a rough sketch of how a deployer might encode them follows the list):
- Pre-use notice. Consumers must be told, before the technology is used, that it will be used, what it does, and how it factors into the decision.
- Opt-out rights. Consumers can refuse purely automated processing in many contexts, with carve-outs for safety, fraud prevention, and certain employment screening where alternatives are impractical.
- Access on request. Consumers can ask how the system reached its outcome in their case — a meaningful but bounded transparency right that stops short of forcing disclosure of full model internals or training data.
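For an engineering team building a deployer-side compliance layer, the three obligations map naturally onto a gate in the decision pipeline. The sketch below is a minimal illustration, not the regulation's own vocabulary: the types, field names, and carve-out list are assumptions chosen to make the structure concrete.

```typescript
// Hypothetical deployer-side gate for an ADMT "significant decision".
// All names and the carve-out list are illustrative, not the CPPA's defined terms.

type CarveOut = "safety" | "fraud-prevention" | "no-practical-alternative";

interface ConsumerRecord {
  id: string;
  preUseNoticeServedAt?: Date; // pre-use notice must predate any ADMT use
  optedOutOfADMT: boolean;     // opt-out of purely automated processing
}

interface DecisionRecord {
  consumerId: string;
  outcome: "approved" | "denied";
  keyFactors: string[];        // retained to answer access-on-request queries
  decidedAt: Date;
}

const decisionLog: DecisionRecord[] = [];

function mayRunADMT(consumer: ConsumerRecord, carveOut?: CarveOut): boolean {
  // Obligation 1: pre-use notice must have been served before this run.
  if (!consumer.preUseNoticeServedAt || consumer.preUseNoticeServedAt > new Date()) {
    return false;
  }
  // Obligation 2: honor an opt-out unless a recognized carve-out applies.
  if (consumer.optedOutOfADMT && carveOut === undefined) {
    return false;
  }
  return true;
}

function recordDecision(d: DecisionRecord): void {
  // Obligation 3: keep enough context to explain the outcome later.
  decisionLog.push(d);
}

function explainDecision(consumerId: string): string[] {
  // Access on request: return the factors behind this consumer's decision,
  // not model internals or training data.
  return decisionLog
    .filter((d) => d.consumerId === consumerId)
    .flatMap((d) => d.keyFactors);
}
```

The point of the gate structure is that the first two obligations are preconditions checked before the system runs, while the third is a record-keeping duty that must be designed in from the start, since an explanation cannot be reconstructed after the fact from a model that logged nothing.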
This is recognizably in the lineage of the European GDPR's Article 22, but with American calibration. The CPPA resisted calls to mandate that algorithmic impact assessments be published as public documents, recognizing that publication would erode competitive differentiation without obviously improving consumer outcomes. The result is closer to a notice-and-rights framework than a licensing regime — which is the right instinct.
The patchwork problem just got harder
California's announcement does not arrive in isolation. Colorado's Artificial Intelligence Act (SB 24-205), signed by Governor Polis in 2024, takes effect February 1, 2026, imposing duties on developers and deployers of "high-risk" AI systems, with enforcement reserved exclusively to the state attorney general rather than a private cause of action. Texas's Responsible AI Governance Act (HB 149), signed by Governor Abbott in June 2025, layers on a separate regime with its own definitions and triggers. New York City's Local Law 144 on automated employment decision tools has been live since 2023. Illinois, New Jersey, and Virginia all have proposals in advanced stages.
For a mid-sized SaaS company offering an HR analytics product to customers in fifteen states, the burden of compliance no longer lies in the substantive rules — most are reasonable on their own terms — but in mapping them. Definitions of "automated decision-making," "significant decision," "high-risk," and "profiling" diverge state by state. Opt-out mechanics, notice timing, and assessment cadences differ. Penalty structures vary by orders of magnitude.
The danger is not regulation itself but regulatory entropy: rules that are individually defensible but collectively impose a tax on building, especially for startups that cannot maintain a 50-state compliance team.
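The mapping burden is, at bottom, a normalization problem: each regime has its own trigger, notice mechanics, and penalty scale, and a deployer has to reduce them to a common schema before it can reason about a single product feature. A minimal sketch follows; every value in it is a placeholder for illustration, not the statutes' actual terms.

```typescript
// Hypothetical normalization of divergent state regimes. The field values
// below are illustrative placeholders, not the statutes' actual definitions.

interface StateRegime {
  state: string;
  statute: string;
  decisionTrigger: string;   // what counts as a covered decision
  optOutRequired: boolean;
  noticeTiming: "pre-use" | "at-decision" | "annual";
  maxPenaltyUSD: number;     // placeholder magnitudes only
}

const regimes: StateRegime[] = [
  { state: "CA", statute: "CCPA ADMT regs", decisionTrigger: "significant decision",
    optOutRequired: true, noticeTiming: "pre-use", maxPenaltyUSD: 7_500 },
  { state: "CO", statute: "SB 24-205", decisionTrigger: "consequential decision",
    optOutRequired: false, noticeTiming: "at-decision", maxPenaltyUSD: 20_000 },
  { state: "TX", statute: "HB 149", decisionTrigger: "high-risk system",
    optOutRequired: false, noticeTiming: "pre-use", maxPenaltyUSD: 100_000 },
];

// A single product feature must satisfy the union of obligations
// across every state where it ships.
function obligationsFor(states: string[]): StateRegime[] {
  return regimes.filter((r) => states.includes(r.state));
}

const hrToolStates = ["CA", "CO", "TX"];
for (const r of obligationsFor(hrToolStates)) {
  console.log(`${r.state}: trigger="${r.decisionTrigger}", notice=${r.noticeTiming}`);
}
```

Even this toy schema shows where the entropy comes from: the rows differ not in stringency but in vocabulary, so the engineering work is reconciling three definitions of the same concept rather than meeting one demanding standard.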
The federal vacuum is now structural
President Trump's January 2025 rescission of Executive Order 14110 and the subsequent rollout of America's AI Action Plan, with its preemption-leaning posture, have left a vacuum that states are filling on a first-mover basis. The administration has signaled interest in narrowing state authority over frontier model development, but Congress has not acted, and whether algorithmic-accountability rules of general applicability — like California's — would survive a preemption challenge has yet to be tested in court.
This is the wrong outcome for everyone. Industry groups that fought hardest against EO 14110 are now navigating a 50-state map that is plainly more onerous than a single, well-designed federal floor would have been. Consumer advocates who celebrated state activism are watching enforcement budgets stretch thin across regulators with overlapping jurisdiction.
What good policy looks like from here
The CPPA's willingness to narrow its rules in response to evidence and stakeholder input is the model. Three principles should guide what comes next:
- Risk-calibrated scope. Algorithmic accountability rules should bite hardest where decisions are consequential and irreversible — denial of housing, credit, employment — and tread lightly on lower-stakes personalization. California's final scope reflects this; other states should follow.
- Interoperability, not uniformity. A federal floor — perhaps modeled on a narrowed version of the bipartisan American Privacy Rights Act framework — would let states experiment above it without forcing companies to satisfy a dozen incompatible definitions.
- Process rights over outcome mandates. Requiring notice, opt-outs, and individualized explanation is durable. Requiring specific accuracy thresholds or audit methodologies risks freezing methods before the field has matured.
California's ADMT rules will be remembered less for what they require than for what they declined to require. That restraint is worth defending — and worth replicating, both in the next state to legislate and, eventually, in Washington.