For nearly a decade, American policymakers have wrestled with a deceptively simple question: can the state require online platforms to take down speech it deems false? This week, federal courts gave the clearest answer yet, and it is a resounding no. The rulings in Kohls v. Bonta, which struck down California's AB 2655 (the Defending Democracy from Deepfake Deception Act of 2024) and AB 2839 on First Amendment grounds, have left the nation's most aggressive state-level misinformation takedown regime in tatters.
Combined with the Trump administration's January 2025 executive order Restoring Freedom of Speech and Ending Federal Censorship, which dismantled federal jawboning channels between agencies and platforms, the legal terrain has fundamentally shifted. Speech-restrictive responses to online falsehoods are now running into a constitutional wall — and that is, on balance, good news for the open internet.
What the California laws tried to do
Signed by Governor Gavin Newsom in September 2024, AB 2655 required large online platforms to block or label "materially deceptive" AI-generated content related to elections during defined pre- and post-election windows. AB 2839 went further, exposing creators and distributors of deceptive election content to civil liability — including injunctions and damages — for a sweeping range of altered media.
The laws were pitched as narrow, surgical responses to the rise of generative AI in political campaigns. In practice, they handed state actors and private plaintiffs an extraordinarily broad mandate to police political speech. Christopher Kohls — a satirist who posts political parody videos under the handle "Mr Reagan" — challenged AB 2839 almost immediately. In October 2024, U.S. District Judge John Mendez of the Eastern District of California issued a preliminary injunction, writing that the statute "acts as a hammer instead of a scalpel" and "serves as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas."
The subsequent rulings now reaching the Ninth Circuit have only extended that reasoning. Courts have found that the laws impose content-based restrictions on political speech, fail strict scrutiny, and conflict with Section 230 of the Communications Decency Act, which preempts state laws that treat interactive computer services as the publishers of third-party content.
The Murthy precedent and the federal pullback
The state-level setbacks did not emerge in a vacuum. In Murthy v. Missouri (2024), the Supreme Court declined to issue a sweeping ruling on government-platform coordination but signaled deep skepticism of informal pressure campaigns. Lower courts have since been considerably more receptive to First Amendment challenges against "jawboning" — and the political branches have responded accordingly.
President Trump's January 20, 2025 executive order directed federal agencies to halt coordination with social media companies on content moderation, audit prior conduct of the Cybersecurity and Infrastructure Security Agency (CISA) and the State Department's now-defunct Global Engagement Center, and bar federal employees from "infringing on the constitutionally protected free speech rights of any American citizen." Whatever one's view of the order's politics, the practical effect is to end an era in which executive branch officials could quietly nudge platforms toward removals.
Why the pro-innovation case for this shift is strong
It is tempting to view the demise of these statutes as a defeat for democratic integrity. The better reading is that it preserves the conditions under which democratic discourse — and the platforms that host it — can actually function.
- Compliance scaling problems. Mandatory takedown windows would have forced platforms to deploy aggressive classifier-driven removals during election seasons, with predictably high false-positive rates affecting satire, journalism, and opposition speech.
- Chilling effects on small developers. The statutes' definitions of "materially deceptive AI-generated content" were broad enough to ensnare open-source tool makers, independent creators, and academic researchers. Larger incumbents could afford the lawyers; smaller innovators could not.
- Patchwork incoherence. A 50-state quilt of conflicting deepfake rules (Texas's SB 751, Minnesota's now-enjoined 2023 law, and California's now-struck statutes each define harm differently) would have made nationwide product launches functionally impossible.
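The false-positive worry in the first point is, at bottom, a base-rate problem: when genuinely deceptive content is a tiny fraction of election-season posts, even an accurate classifier will flag mostly legitimate speech. A minimal sketch makes the arithmetic concrete; all the rates below are hypothetical numbers chosen for illustration, not measurements of any real system.

```python
# Illustrative base-rate arithmetic: why mandated classifier-driven
# takedowns over-remove legitimate speech. All rates are hypothetical.

def flagged_precision(prevalence: float, tpr: float, fpr: float) -> float:
    """Fraction of flagged posts that are actually deceptive (Bayes' rule).

    prevalence: share of posts that are genuinely deceptive
    tpr: true-positive rate (share of deceptive posts the classifier catches)
    fpr: false-positive rate (share of legitimate posts wrongly flagged)
    """
    true_flags = prevalence * tpr
    false_flags = (1 - prevalence) * fpr
    return true_flags / (true_flags + false_flags)

# Suppose 1 in 1,000 election posts is a genuine deepfake, and the
# classifier catches 95% of them with only a 2% false-positive rate.
p = flagged_precision(prevalence=0.001, tpr=0.95, fpr=0.02)
print(f"{p:.1%} of removed posts would actually be deceptive")
# Roughly 4.5%; the other ~95% of removals hit satire, journalism,
# and ordinary political speech.
```

Even a classifier that sounds excellent on paper removes about twenty legitimate posts for every deceptive one under these assumptions, which is why a legal mandate to remove at scale is hard to square with protections for satire and opposition speech.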
Better tools exist
None of this means doing nothing about AI-generated political deception. But the more proportionate, durable responses lie elsewhere:
- Transparency and provenance. Industry initiatives like the Coalition for Content Provenance and Authenticity (C2PA) and watermarking work by OpenAI, Google, and Meta address the same problem without compelling speech removal.
- Existing tort law. Defamation, false light, and deceptive trade practices statutes already reach the most harmful synthetic content, with the procedural protections the First Amendment requires.
- Counter-speech and media literacy. The Stanford Internet Observatory and Knight First Amendment Institute have documented that rapid, credible corrections — not removals — are the most effective response to viral misinformation.
What comes next
Appeals in Kohls v. Bonta will move through the Ninth Circuit over the coming months, but the trajectory is clear. State legislatures queuing up copycat statutes (bills are pending in New York, Massachusetts, and Washington) should take this as the courts' polite but firm signal to find a different lane. Regulators enforcing the EU's Digital Services Act, which faces its own First Amendment-adjacent reckoning in transatlantic disputes, should take note as well.
The First Amendment has, once again, done what it was designed to do: prevent the government from picking winners and losers in political speech, even when the rationale sounds compelling and the technology looks new. The next generation of policy responses to AI-era misinformation will have to be built on that foundation, not against it.