Section 230 at 30: Why America Needs a Scalpel, Not a Sledgehammer

As Congress revisits Section 230 reform in 2026, lawmakers should target specific harms without dismantling the legal architecture that built the modern internet.

[Infographic: Section 230 Reform by the Numbers (People of Internet Research) — 26 words in the core liability shield; 30+ amendment bills introduced; 3 major Supreme Court rulings since 2023; 30 years in force since enactment in 1996.]

Section 230 of the Communications Decency Act turns thirty in 2026, and Washington is once again reaching for the rewrite button. In the past six weeks alone, three new bills have landed in committee, the Senate Commerce Committee has scheduled a fourth hearing on intermediary liability, and the White House has signaled openness to "recalibration." The mood is bipartisan, the rhetoric is sweeping, and the risk to the open internet is real.

People of Internet has consistently argued that Section 230 is neither sacrosanct nor expendable. It is a 26-word statute that does enormous work: it allows platforms — from Reddit subforums to Substack newsletters to your local Little League's message board — to host user speech without facing ruinous litigation over every comment. The question is not whether to reform it. The question is whether reform will be surgical or destructive.

What the Current Debate Gets Right

The legitimate grievances driving reform are not imaginary. AI-generated non-consensual intimate imagery has exploded since 2024. Algorithmic amplification of self-harm content targeting minors continues to surface in plaintiffs' filings. Foreign influence operations exploit recommendation systems faster than platforms can respond. Voters across the political spectrum agree something must change.

Congress has taken note. The TAKE IT DOWN Act, signed in 2025, created a narrowly tailored notice-and-removal regime for non-consensual intimate imagery — including AI-generated deepfakes — without touching Section 230's core liability shield. That is the right model: identify a specific harm, define it precisely, and impose duties calibrated to it. The Kids Online Safety Act, which cleared the Senate in 2024 and has been reintroduced in modified form, follows a similar logic by focusing on design choices rather than speech itself.

Where Proposed Reforms Go Wrong

Other proposals on the table are far less careful. The latest iteration of the SAFE TECH Act would strip immunity any time a platform receives "payment" for hosting content — a definition broad enough to capture every ad-supported service on the internet. A competing House bill would condition immunity on "reasonable" content moderation, inviting courts to second-guess every editorial choice and effectively federalizing speech policy through tort litigation.

Both approaches misread the Supreme Court's recent jurisprudence. In Moody v. NetChoice (2024), the Court reaffirmed that platforms exercise editorial discretion protected by the First Amendment when they curate user content. In Gonzalez v. Google (2023), the Court declined the invitation to narrow Section 230 around algorithmic recommendations, disposing of the case on other grounds and leaving intact the lower-court precedent that treats ranking and surfacing decisions as part of publication itself. A reform package that punishes platforms for moderating, or for using algorithms at all, would collide head-on with both rulings.

The Small-Platform Problem

The most under-discussed casualty of broad Section 230 rollback would be the long tail of small and mid-sized platforms. Google and Meta can absorb a litigation tax. A two-person startup launching a niche community cannot. Research from the Information Technology and Innovation Foundation has consistently found that intermediary liability costs fall disproportionately on entrants, entrenching incumbents rather than disciplining them.

This matters because the most plausible answer to platform concentration is more competition, not less. Decentralized protocols like ActivityPub and AT Protocol, federated services like Mastodon and Bluesky, and self-hosted forums all depend on the same liability shield that lets a Discord server operator sleep at night. Repeal Section 230 and you do not break up Big Tech — you cement it.

A Proportionate Reform Agenda

The path forward is not difficult to describe, only difficult to legislate. Three principles should guide any 2026 reform package:

  1. Specificity. Target precisely defined harms, as the TAKE IT DOWN Act did for non-consensual intimate imagery, rather than conditioning immunity on open-ended standards like "reasonable" moderation.
  2. Constitutional durability. Regulate design choices and conduct rather than editorial judgment, so that reforms survive the First Amendment scrutiny Moody v. NetChoice demands.
  3. Proportionality. Calibrate any new duties to a platform's size and capacity, so that compliance costs do not entrench incumbents by crushing the small and federated services that competition depends on.

What the Next Six Months Should Look Like

If Congress wants a serious reform effort, it should commission an updated Congressional Research Service review of post-Moody case law, fund the Federal Trade Commission to study how liability rules affect competition among small platforms, and hold hearings that include representatives from federated and open-source projects — not just the largest incumbents whose interests often diverge from the broader ecosystem's.

Section 230 is not a subsidy to Silicon Valley. It is a load-bearing beam in the architecture of online speech, commerce, and civic life. The harms driving today's reform calls are real, and they deserve real responses. But the worst outcome would be a sweeping rewrite that solves none of the actual problems while breaking the parts of the internet that still work. Thirty years on, the case for proportionate, evidence-based reform — and against wholesale repeal — has only grown stronger.

Sources & Citations

  1. 47 U.S.C. § 230 — Statutory text
  2. Moody v. NetChoice, LLC (2024) — Supreme Court opinion
  3. Gonzalez v. Google LLC (2023) — Supreme Court opinion
  4. Congressional Research Service: Section 230 — An Overview (R46751)
  5. ITIF analysis on Section 230 and competition
  6. TAKE IT DOWN Act — Public Law 119-12 (2025)