California's Deepfake Laws Fall: How Kohls v. Bonta Reset America's Misinformation Debate

Federal courts blocked AB 2655 and AB 2839 on First Amendment grounds, marking a decisive shift away from state-mandated content removal.

[Summary graphic: The Collapse of State Misinformation Mandates — 2 California laws blocked (AB 2655 and AB 2839); 2024: Judge Mendez issues the Kohls preliminary injunction; January 2025: federal executive order ends agency-platform coordination; 47 U.S.C. § 230 preempts state-imposed publisher liability.]

For nearly a decade, American policymakers have wrestled with a deceptively simple question: can the state require online platforms to take down speech it deems false? This week, federal courts gave the clearest answer yet — and it is a resounding no. The rulings in Kohls v. Bonta, which struck down California's AB 2655 (the Defending Democracy from Deepfake Deception Act of 2024) and AB 2839 on First Amendment grounds, have left the nation's most aggressive state-level misinformation takedown regime in tatters.

Combined with the Trump administration's January 2025 executive order Restoring Freedom of Speech and Ending Federal Censorship, which dismantled federal jawboning channels between agencies and platforms, the legal terrain has fundamentally shifted. Speech-restrictive responses to online falsehoods are now running into a constitutional wall — and that is, on balance, good news for the open internet.

What the California laws tried to do

Signed by Governor Gavin Newsom in September 2024, AB 2655 required large online platforms to block or label "materially deceptive" AI-generated content related to elections during defined pre- and post-election windows. AB 2839 went further, exposing creators and distributors of deceptive election content to civil liability — including injunctions and damages — for a sweeping range of altered media.

The laws were pitched as narrow, surgical responses to the rise of generative AI in political campaigns. In practice, they handed state actors and private plaintiffs an extraordinarily broad mandate to police political speech. Christopher Kohls — a satirist who posts political parody videos under the handle "Mr Reagan" — challenged AB 2839 almost immediately. In October 2024, U.S. District Judge John Mendez of the Eastern District of California issued a preliminary injunction, writing that the statute "acts as a hammer instead of a scalpel" and "serves as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas."

The subsequent rulings now reaching the Ninth Circuit have only deepened that analysis. Courts have found that the laws regulate based on viewpoint and content, fail strict scrutiny, and conflict with Section 230 of the Communications Decency Act, which preempts state laws imposing publisher-like liability on interactive computer services.

The Murthy precedent and the federal pullback

The state-level setbacks did not emerge in a vacuum. In Murthy v. Missouri (2024), the Supreme Court declined to reach the merits of government-platform coordination, holding that the challengers lacked standing and leaving the constitutional limits on informal "jawboning" unsettled. Litigants have continued to press First Amendment challenges to such pressure campaigns in the lower courts, and the political branches have responded accordingly.

President Trump's January 20, 2025 executive order directed federal agencies to halt coordination with social media companies on content moderation, audit prior conduct of the Cybersecurity and Infrastructure Security Agency (CISA) and the State Department's now-defunct Global Engagement Center, and bar federal employees from "infringing on the constitutionally protected free speech rights of any American citizen." Whatever one's view of the order's politics, the practical effect is to end an era in which executive branch officials could quietly nudge platforms toward removals.

Why the pro-innovation case for this shift is strong

It is tempting to view the demise of these statutes as a defeat for democratic integrity. The better reading is that it preserves the conditions under which democratic discourse — and the platforms that host it — can actually function.

Better tools exist

None of this means doing nothing about AI-generated political deception. But the more proportionate, durable responses lie elsewhere than in state-mandated takedowns.

What comes next

Appeals in Kohls v. Bonta will move through the Ninth Circuit over the coming months, but the trajectory is clear. State legislatures that have been queuing up copycat statutes — bills are pending in New York, Massachusetts, and Washington — should take this as the courts' polite but firm signal to find a different lane. So should EU regulators enforcing the Digital Services Act, which is approaching its own First Amendment-adjacent reckoning in transatlantic disputes.

The First Amendment has, once again, done what it was designed to do: prevent the government from picking winners and losers in political speech, even when the rationale sounds compelling and the technology looks new. The next generation of policy responses to AI-era misinformation will have to be built on that foundation, not against it.

Sources & Citations

  1. California AB 2655 — Defending Democracy from Deepfake Deception Act
  2. California AB 2839 — Elections: Deceptive Media in Advertisements
  3. Murthy v. Missouri — Supreme Court Opinion
  4. Executive Order: Restoring Freedom of Speech and Ending Federal Censorship
  5. Section 230 of the Communications Decency Act