Section 230 of the Communications Decency Act turns thirty next year, and Washington is once again reaching for the rewrite button. In the past six weeks alone, three new bills have landed in committee, the Senate Commerce Committee has scheduled a fourth hearing on intermediary liability, and the White House has signaled openness to "recalibration." The mood is bipartisan, the rhetoric is sweeping, and the risk to the open internet is real.
People of Internet has consistently argued that Section 230 is neither sacrosanct nor expendable. Its core provision runs just 26 words, and those words do enormous work: they allow platforms, from Reddit subforums to Substack newsletters to your local Little League's message board, to host user speech without facing ruinous litigation over every comment. The question is not whether to reform it. The question is whether reform will be surgical or destructive.
What the Current Debate Gets Right
The legitimate grievances driving reform are not imaginary. AI-generated non-consensual intimate imagery has exploded since 2024. Algorithmic amplification of self-harm content targeting minors continues to surface in plaintiffs' filings. Foreign influence operations exploit recommendation systems faster than platforms can respond. Voters across the political spectrum agree something must change.
Congress has taken note. The TAKE IT DOWN Act, signed in 2025, created a narrowly tailored notice-and-removal regime for non-consensual intimate imagery — including AI-generated deepfakes — without touching Section 230's core liability shield. That is the right model: identify a specific harm, define it precisely, and impose duties calibrated to it. The Kids Online Safety Act, which cleared the Senate in 2024 and has been reintroduced in modified form, follows a similar logic by focusing on design choices rather than speech itself.
Where Proposed Reforms Go Wrong
Other proposals on the table are far less careful. The latest iteration of the SAFE TECH Act would strip immunity any time a platform receives "payment" for hosting content — a definition broad enough to capture every ad-supported service on the internet. A competing House bill would condition immunity on "reasonable" content moderation, inviting courts to second-guess every editorial choice and effectively federalizing speech policy through tort litigation.
Both approaches misread the Supreme Court's recent jurisprudence. In Moody v. NetChoice (2024), the Court reaffirmed that platforms exercise editorial discretion protected by the First Amendment when they curate user content. In Gonzalez v. Google (2023), the Court declined the invitation to narrow Section 230 around algorithmic recommendations, resolving the case on other grounds and leaving the shield for ranking and surfacing decisions where the lower courts had left it. A reform package that punishes platforms for moderating, or for using algorithms at all, would collide head-on with both decisions.
The Small-Platform Problem
The most under-discussed casualty of broad Section 230 rollback would be the long tail of small and mid-sized platforms. Google and Meta can absorb a litigation tax. A two-person startup launching a niche community cannot. Research from the Information Technology and Innovation Foundation has consistently found that intermediary liability costs fall disproportionately on entrants, entrenching incumbents rather than disciplining them.
This matters because the most plausible answer to platform concentration is more competition, not less. Decentralized protocols like ActivityPub and AT Protocol, federated services like Mastodon and Bluesky, and self-hosted forums all depend on the same liability shield that lets a Discord server operator sleep at night. Repeal Section 230 and you do not break up Big Tech — you cement it.
A Proportionate Reform Agenda
The path forward is not difficult to describe, only difficult to legislate. Three principles should guide any 2026 reform package:
- Target specific harms, not generic "bad content." Non-consensual intimate imagery, child sexual abuse material, and clear incitement are already reachable, whether through Section 230's express exemption for federal criminal law or through targeted statutes like the TAKE IT DOWN Act. Future carve-outs should be similarly narrow and constitutionally defensible.
- Regulate process, not viewpoint. Transparency requirements — covering moderation policies, appeals, researcher access, and algorithmic disclosures — improve accountability without dictating what platforms must allow or remove. The EU's Digital Services Act offers a flawed but instructive template; America can do better by avoiding its prescriptive content rules.
- Preserve the litigation shield for ordinary moderation. Section 230(c)(1) and (c)(2) work in tandem: the first shields platforms from liability for what their users post, the second protects good-faith decisions to remove or restrict content. Together they let platforms host and curate without facing endless lawsuits over individual decisions, and that structure should survive any reform.
What the Next Six Months Should Look Like
If Congress wants a serious reform effort, it should commission an updated Congressional Research Service review of post-Moody case law, fund the Federal Trade Commission to study how liability rules affect competition among small platforms, and hold hearings that include representatives from federated and open-source projects — not just the largest incumbents whose interests often diverge from the broader ecosystem's.
Section 230 is not a subsidy to Silicon Valley. It is a load-bearing beam in the architecture of online speech, commerce, and civic life. The harms driving today's reform calls are real, and they deserve real responses. But the worst outcome would be a sweeping rewrite that solves none of the actual problems while breaking the parts of the internet that still work. Thirty years on, the case for proportionate, evidence-based reform — and against wholesale repeal — has only grown stronger.