For more than a decade, Article 19 of Brazil's Marco Civil da Internet (Law 12.965/2014) served as the cornerstone of the country's intermediary liability regime. Under its plain text, application providers could only be held civilly liable for third-party content if, after a specific judicial order, they failed to take down the offending material. It was a deliberately speech-protective design — closer in spirit to Section 230 of the U.S. Communications Decency Act than to Europe's notice-and-action models — and it underwrote the explosion of Brazilian platforms, creator economies, and small-business commerce that followed.
That framework has now been substantially rewritten. In a landmark decision concluding its long-running review of Article 19's constitutionality (Themes 533 and 987 in the Court's general repercussion docket), the Supremo Tribunal Federal (STF) ruled that the article, as written, fails to adequately protect fundamental rights when applied across the board. The Court left the judicial-order rule intact as the default — but carved out broad categories where platforms must act on notice, and in some cases proactively, or face direct civil liability.
What the Court Actually Changed
Read carefully, the STF's decision is not a wholesale repeal of Article 19. It is a layered regime that introduces three tiers of platform responsibility:
- Court-order tier (preserved): For ordinary disputes — defamation between private parties, contractual disagreements, most user-on-user grievances — the original Article 19 standard survives. Platforms remain shielded until a judge orders removal.
- Notice-and-takedown tier (expanded): For a defined list of serious unlawful content, platforms must act on extrajudicial notice. This covers categories such as content attacking democratic institutions and the rule of law, incitement to coups, terrorism, racism and other hate crimes, gender-based violence, and incitement to suicide or self-harm.
- Proactive-duty tier (new): For child sexual abuse material (CSAM) and certain other egregious categories, the Court signaled that platforms have a duty of care to detect and prevent dissemination — not merely respond to flags.
The ruling also carries procedural innovations: a presumption of platform responsibility in cases involving paid advertising or algorithmically amplified content that falls into the expanded categories, and stronger transparency and due-process obligations for content moderation decisions.
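To see the shape of the new regime, consider a minimal sketch in Python of how a platform's compliance logic might map content categories onto the three tiers. Everything here is a hypothetical illustration: the category labels, the `liability_tier` function, and the decision to treat paid or amplified content as escalating the tier are assumptions layered on the Court's framework, not anything prescribed by the ruling or drawn from a real moderation system.

```python
from enum import Enum, auto

class Tier(Enum):
    COURT_ORDER = auto()          # default Article 19 rule: liable only after a judicial order
    NOTICE_AND_TAKEDOWN = auto()  # must act on extrajudicial notice
    PROACTIVE_DUTY = auto()       # duty of care: detect and prevent, not just react to flags

# Hypothetical category labels, loosely tracking the ruling's enumerated lists.
PROACTIVE_CATEGORIES = {"csam"}
NOTICE_CATEGORIES = {
    "anti_democratic", "coup_incitement", "terrorism",
    "racism_hate_crime", "gender_based_violence", "suicide_incitement",
}

def liability_tier(category: str, paid_or_amplified: bool = False) -> Tier:
    """Map a content category to the applicable tier of platform responsibility."""
    if category in PROACTIVE_CATEGORIES:
        return Tier.PROACTIVE_DUTY
    if category in NOTICE_CATEGORIES:
        # The presumption of responsibility for paid ads and algorithmically
        # amplified content behaves, in practice, closer to a proactive duty
        # than to wait-for-notice, so this sketch escalates the tier.
        return Tier.PROACTIVE_DUTY if paid_or_amplified else Tier.NOTICE_AND_TAKEDOWN
    # Ordinary disputes (private defamation, contractual grievances) keep
    # the original court-order shield.
    return Tier.COURT_ORDER
```

The point of the sketch is the shape of the decision, not the labels: the hard work, legal and technical alike, lies in classifying borderline content into these buckets at scale and in real time.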
Why the Court Moved — and Why Reasonable People Disagreed
The political backdrop is impossible to ignore. The January 8, 2023 storming of the Praça dos Três Poderes, in which rioters attacked the Congress, the Supreme Court, and the Planalto Palace, sharpened a debate that had already been building since the 2018 and 2022 election cycles. Justices repeatedly invoked the role of platforms in coordinating offline violence and amplifying anti-democratic content as motivation for revisiting Article 19.
There is a serious case for differential treatment of categorically illegal material — CSAM most obviously, where the harm is irreparable and the legal status unambiguous globally. The international trend is in this direction: the EU's Digital Services Act imposes systemic-risk obligations on Very Large Online Platforms; the UK Online Safety Act 2023 creates specific duties around illegal content; even Section 230's U.S. defenders acknowledge narrow carve-outs (FOSTA-SESTA, the proposed STOP CSAM Act).
The Innovation Cost the Court Underweighted
But the STF's ruling goes considerably further than those models, and the gap matters. Three risks deserve a clear-eyed assessment:
First, the over-removal problem. Notice-based liability without strong procedural safeguards gives platforms a powerful incentive to remove first and ask questions later. "Content attacking democratic institutions" is a category that, even read in good faith, can sweep in sharp political criticism, satire, and investigative journalism. Brazilian civil society organizations, including the Instituto de Tecnologia e Sociedade (ITS Rio) and InternetLab, have warned for years that broad notice-and-takedown regimes have measurable chilling effects on legitimate expression.
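A back-of-the-envelope expected-cost comparison makes the incentive concrete. The numbers below are pure assumptions for illustration; the structural point is that the platform internalizes almost none of the social cost of wrongly removing lawful speech, while bearing the full cost of leaving up content later deemed unlawful.

```python
# Illustrative only: all figures are assumed, not drawn from the ruling
# or from any platform's actual exposure.

def expected_cost_of_keeping(p_unlawful: float, liability_penalty: float) -> float:
    # If the content turns out to be unlawful, the platform now bears
    # direct civil liability once notified.
    return p_unlawful * liability_penalty

def expected_cost_of_removing(p_lawful: float, private_cost_of_error: float) -> float:
    # Wrongly removing lawful speech imposes a real social cost, but the
    # platform's private cost is roughly a user complaint.
    return p_lawful * private_cost_of_error

p_unlawful = 0.10  # the platform itself thinks the content is probably lawful
keep = expected_cost_of_keeping(p_unlawful, liability_penalty=100_000.0)
remove = expected_cost_of_removing(1 - p_unlawful, private_cost_of_error=10.0)

print(f"keep: {keep:,.0f}  remove: {remove:,.0f}")
# keep: 10,000  remove: 9 -> removal dominates even for probably-lawful
# content, which is exactly the over-removal incentive.
```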
Second, the compliance asymmetry. Global platforms have legal teams, content classifiers, and moderation budgets to absorb new obligations. Brazilian startups, the very domestic success stories Marco Civil helped enable, do not. A regime that nominally applies to all "application providers" but in practice is navigable only by the largest players entrenches incumbents and raises the barrier to homegrown competition.
Third, the rule-of-law value of judicial review. Article 19's original design forced contested removal decisions through a court — slower, but legitimate. Shifting that determination to platform trust-and-safety teams operating under threat of liability outsources a fundamentally adjudicative function to private actors with strong incentives to err toward suppression.
A Better Path Forward
The STF has spoken, and Marco Civil's safe harbor as Brazilians knew it is gone. What Congress and regulators do next will determine whether this becomes a proportionate update or a slow erosion of Brazil's open-internet model. A few principles should guide implementation:
- Narrow, clearly enumerated categories of unlawful content — not open-ended terms susceptible to political redefinition.
- Robust counter-notice and reinstatement procedures, so wrongly removed speech can be restored quickly.
- Expedited judicial review channels for contested categories, preserving the rule-of-law check.
- Proportionate obligations scaled to platform size and risk profile, in line with the DSA's tiered approach.
- Transparency reporting requirements that allow civil society to measure over-removal, not just compliance.
Brazil pioneered a thoughtful intermediary liability framework in 2014. The challenge now is to update it for the platform economy of 2026 without surrendering the principles that made it work.