America's AI Patchwork Problem: Why Washington Must Lead, Not Lag

As state legislatures multiply incompatible AI rules, the case for a proportionate federal framework with risk-tiered obligations and clear preemption grows.

[Infographic: America's Fragmented AI Rulebook (May 2026) — all 50 states plus DC and Puerto Rico introduced AI bills in 2025; the Colorado AI Act (SB 24-205) took effect February 2026; 28+ states have enacted AI measures; zero federal omnibus AI statutes have passed. Source: peopleofinternet.com]

Key Takeaways

As of May 2026, the United States stands at a regulatory crossroads on artificial intelligence. With the Colorado AI Act now in force, Texas's Responsible Artificial Intelligence Governance Act on the books, and California's Frontier AI Transparency Act phasing in, developers face a fragmenting compliance landscape that threatens both innovation and the very accountability outcomes lawmakers are pursuing.

The picture is striking. According to the National Conference of State Legislatures, every state, Puerto Rico, the Virgin Islands and the District of Columbia introduced AI-related bills in the 2025 sessions, with hundreds of measures tracked and dozens enacted. Each new statute adds definitions, audit obligations and disclosure regimes that rarely align with one another. For startups outside Big Tech, navigating fifty regulatory regimes is not a feature of federalism; it is an entry barrier.

The Federal Vacuum

The Trump administration's January 2025 Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," rescinded the Biden-era EO 14110 and pivoted toward a deregulatory posture. The follow-on America's AI Action Plan, released in July 2025, emphasizes accelerating AI infrastructure, expanding compute capacity, and removing impediments to model deployment. The plan correctly identifies that overcautious regulation risks ceding leadership to competitors abroad.

Yet a deregulatory posture at the federal level does not fill the federal vacuum — it deepens it. When Washington steps back without preempting state action, fifty laboratories of democracy become fifty compliance jurisdictions. The proposed multi-year moratorium on state AI laws floated during 2025 budget negotiations was a blunt instrument, but the underlying instinct was right: AI is an interstate-commerce problem demanding an interstate-commerce answer.

State Laws Multiply, Definitions Diverge

Consider the variation. The Colorado AI Act (SB 24-205), which took effect on February 1, 2026, regulates "high-risk AI systems" that make "consequential decisions" in employment, lending, housing and other domains, requiring impact assessments and consumer notices. Texas's HB 149, signed in June 2025, takes a narrower posture focused on intentional misuse and government deployments while preserving a regulatory sandbox for developers. California's SB 53, the Frontier AI Transparency Act, targets only the largest frontier developers with transparency and safety-incident reporting duties.

These approaches are not obviously incompatible — but they are not interoperable. A mid-sized HR-tech vendor selling résumé-screening software in Denver, Austin and San Francisco must reverse-engineer compliance from three different statutes, three different definitions of "AI system," and three different enforcement regimes. Multiply by the dozens of bills moving in Albany, Hartford and Springfield, and the result is litigation risk that scales nonlinearly with footprint.

What Proportionate Federal Action Looks Like

The case for federal action is not the case for heavy-handed federal action. The NIST AI Risk Management Framework, released in 2023 and extended in 2024 with its Generative AI Profile, already provides a voluntary, risk-tiered architecture that industry has broadly adopted. Codifying NIST-style risk tiers into a federal floor — preempting conflicting state requirements while preserving genuine consumer-protection laws — would deliver three benefits at once: a single compliance target for developers in place of fifty divergent ones, accountability obligations scaled to actual risk rather than to statutory happenstance, and an end to the definitional divergence that makes multistate deployment so costly.

Avoiding the Two Failure Modes

American AI policy faces two failure modes, and Washington is currently flirting with both. The first is regulatory capture by incumbents — using compliance moats to lock out competitors. SB 1047, vetoed by Governor Newsom in September 2024, illustrated the risk: well-intentioned safety mandates that disproportionately burdened open-weight developers and startups while leaving the largest labs largely unaffected.

The second failure mode is libertarian drift, where the absence of any rules invites populist backlash that ultimately produces worse rules. The Colorado statute is in part a response to perceived federal inaction. Texas's TRAIGA emerged from the same impulse, even as it took a more developer-friendly form. The longer Congress waits, the more state legislators will fill the void — and the harder federal preemption becomes politically.

The choice is not between innovation and accountability. It is between coherent, proportionate rules that deliver both, and a fragmented patchwork that delivers neither.

The Path Forward

Congress should pursue a narrowly scoped federal AI framework grounded in three principles. First, risk-tiered obligations: light-touch transparency for general-purpose systems, meaningful obligations only for high-risk consequential decisions, and frontier-specific duties calibrated to compute or capability thresholds. Second, sectoral integration: existing regulators — the EEOC for employment, the CFPB for lending, the FDA for medical devices — should lead within their domains rather than cede ground to a horizontal AI super-agency. Third, preemption with carve-outs: a federal floor that preempts conflicting state AI-specific mandates while preserving baseline consumer-protection, civil-rights and tort law.

This is the proportionate, evidence-based path. It rejects both the fantasy that AI is so exceptional it demands a brand-new regulatory edifice and the fantasy that no rules at all are politically sustainable. It treats American AI leadership not as something to be regulated into existence, but as something worth protecting from the soft tyranny of incoherent rules.

Sources & Citations

  1. Colorado AI Act (SB 24-205)
  2. Executive Order 14179: Removing Barriers to American Leadership in AI (Jan 2025)
  3. America's AI Action Plan (July 2025)
  4. Texas HB 149 — Responsible Artificial Intelligence Governance Act
  5. California SB 53 — Frontier AI Transparency Act
  6. NIST AI Risk Management Framework
  7. NCSL Artificial Intelligence 2025 Legislation Tracker