Nearly two years after the U.S. Court of Appeals for the Third Circuit issued its decision in Anderson v. TikTok, Inc., the ruling continues to reverberate through American intermediary liability doctrine. The August 2024 opinion held that Section 230 of the Communications Decency Act does not shield a platform when its own recommendation algorithm surfaces harmful content — a position that breaks sharply with decades of precedent and that has emboldened a wave of product-liability claims now working their way through federal courts.
For a regulatory regime that has underwritten the open American internet since 1996, the implications are profound. They also demand a careful, proportionate response — not a panicked rewriting of the rules that made the U.S. the world's dominant platform economy.
What Anderson Actually Held
The case arose from the death of Nylah Anderson, a ten-year-old who died after attempting the viral "Blackout Challenge," an asphyxiation stunt that her TikTok For You Page had algorithmically surfaced. The U.S. District Court for the Eastern District of Pennsylvania dismissed the suit on Section 230 grounds, applying the long-standing rule that platforms cannot be treated as publishers of third-party content.
The Third Circuit reversed. Writing for the panel, Judge Patty Shwartz reasoned that TikTok's algorithmic curation was the platform's own "expressive activity" — and therefore first-party speech that falls outside Section 230's protection. The panel leaned heavily on the U.S. Supreme Court's 2024 decision in Moody v. NetChoice, which characterized content-moderation and curation decisions by large platforms as protected editorial judgment under the First Amendment.
The logic cuts both ways, and that is precisely the problem. If a platform's algorithmic ranking is constitutionally protected speech, the Third Circuit reasoned, then it is also the platform's own speech for purposes of tort liability. Judge Paul Matey's partial concurrence went further, arguing that Section 230 shields platforms only when they host third-party content, not when they recommend it, a reading that would narrow the statute considerably.
A Circuit Split — and a Litigation Surge
The decision puts the Third Circuit at odds with prior rulings from the Second and Ninth Circuits, including Force v. Facebook, Inc. (2d Cir. 2019), which had largely held that recommendation algorithms remain within Section 230's safe harbor. That split is exactly the kind of disagreement the Supreme Court typically resolves; TikTok's reported petition for certiorari, filed after the Third Circuit denied en banc rehearing, remains a focal point for the industry.
In the meantime, plaintiffs' lawyers have not waited. The multidistrict litigation consolidated in the Northern District of California — In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation — has expanded, with plaintiffs reframing claims as challenges to platform design rather than to third-party content. State attorneys general, including the bipartisan coalition that filed against Meta in 2023, have leaned into similar product-liability framings.
The strategic shift is unmistakable. Where pre-Anderson complaints often stumbled over Section 230 motions to dismiss, post-Anderson complaints survive long enough to reach discovery — a stage that, in practice, frequently forces settlement regardless of the underlying merits.
Why Proportionality Matters
None of this is to dismiss the underlying harms. The tragedy in Anderson is real, and platforms that design recommendation systems for engagement bear genuine responsibility for how those systems shape user experience — particularly for minors. The question is not whether platforms should be accountable, but how.
Three concerns counsel against treating Anderson as a license to dismantle intermediary liability:
- Compliance costs fall hardest on smaller platforms. TikTok, Meta, and YouTube can absorb the discovery burden of mass tort litigation. The next generation of competitors — the platforms that will challenge today's incumbents — cannot. A regime that effectively requires every recommendation system to be litigation-tested is a regime that locks in the dominant players.
- Algorithmic curation is not the same as algorithmic targeting. The Third Circuit's framing risks conflating any ranking decision with affirmative endorsement. Search engines, news aggregators, and even spam filters rely on the same basic architecture. A doctrine that treats all of them as first-party speakers would reach far beyond TikTok.
- First Amendment doctrine is in tension with itself. Moody protects platforms' editorial choices against state interference; Anderson uses that same characterization to expose platforms to private tort liability. Courts will need to reconcile these strands, and legislatures should not preempt that work with sweeping statutory rewrites.
A Path Forward
A proportionate response would focus on the narrow set of cases where platform design — particularly design directed at minors — creates foreseeable, serious harm. That is the terrain Congress contemplated in proposals like the Kids Online Safety Act, which passed the Senate in 2024 but stalled in the House, and which targets design choices rather than third-party content.
Equally important is what policymakers should not do. Wholesale repeal of Section 230, of the kind floated periodically on both ends of the political spectrum, would not produce more responsible platforms. It would produce platforms that either moderate aggressively to avoid liability — chilling lawful speech — or moderate nothing at all, retreating to a pre-1996 "distributor" posture. Neither serves users.
The Third Circuit has reopened a question Congress thought it had answered. The Supreme Court, if it takes up Anderson, will have a chance to draw a more workable line. Until it does, the priority for U.S. policymakers should be restraint: let the courts work through the doctrinal puzzle, target genuine design harms through narrowly tailored statutes, and resist the temptation to treat every tragedy as a reason to rewrite the law that built the open American internet.