
Brazil's Supreme Court Rewrites Marco Civil: The End of the Court-Order Shield

The STF's narrowing of Article 19 trades legal certainty for ad-hoc moderation duties — and leaves platforms guessing what counts as "manifestly illegal."

[Infographic: Brazil's Marco Civil after the STF ruling — 2014: Marco Civil (Law 12.965) enacted, establishing the court-order rule; 5+ new extrajudicial categories (terrorism, CSAM, gender violence, among others); PL 2630/2020 "Fake News Bill" stalled in Congress; ~180M Brazilian internet users, among the largest social media markets. Source: peopleofinternet.com]


For more than a decade, Article 19 of Brazil's Marco Civil da Internet (Law 12.965/2014) was the quiet anchor of Latin America's most liberal intermediary-liability regime. It was simple, predictable, and — by global standards — unusually protective of online speech: platforms could not be held civilly liable for user content unless they ignored a specific judicial order to take it down. In June 2025, in the joined cases RE 1037396 (Theme 533, rapporteur Justice Dias Toffoli) and RE 1057258 (Theme 987, rapporteur Justice Luiz Fux), the Supreme Federal Tribunal (STF) substantially narrowed that rule. Implementation is now rolling through 2025–2026, and the consequences for Brazil's information ecosystem are only beginning to materialise.

What the STF actually held

The Court did not strike Article 19 down. Instead, it carved out categories of content for which platforms must act on extrajudicial notices — private complaints from users, civil society, or public authorities — and can be held liable if they fail to remove material that is “manifestly illegal.” The categories named in the ruling cluster around what the justices framed as threats to democracy and to vulnerable people: terrorism and incitement to terrorism, child sexual abuse material, gender-based violence, racism and other hate crimes, and — most contested — content that incites violence against democratic institutions or promotes coups.

For everything else — defamation, ordinary disputes between users, political speech that does not cross those lines — the old Article 19 rule still applies. Removal liability requires a court order. The result is a two-track liability regime grafted onto a statute designed to be uniform.

Why this is not a clean win for “safety”

It is tempting to read the decision as a long-overdue update to a 2014 law written before TikTok existed and before the January 8, 2023 assault on the Praça dos Três Poderes. There is a real problem the Court was trying to solve. But the architecture it chose raises three structural concerns that any pro-innovation, proportionate-regulation framework should take seriously.

1. “Manifestly illegal” is doing too much work

The phrase is borrowed from European doctrine, where it has been litigated for years and still produces inconsistent outcomes. In Brazil it now governs civil liability across a fragmented federal judiciary with no central content-moderation regulator. A post that one trust-and-safety reviewer in São Paulo considers borderline political hyperbole, another in Brasília may see as “anti-democratic incitement.” The cost of guessing wrong is uncapped damages. The rational response is well-known from years of empirical work on notice-and-takedown regimes: over-removal. Lawful but uncomfortable speech — protest organising, satire, journalism about public officials — gets swept up in the precaution.

2. The wrong branch wrote the rule

Marco Civil was the product of a multi-year, multi-stakeholder legislative process that consciously rejected notice-and-takedown in favour of a judicial-order model. Reversing that choice through constitutional adjudication, while Congress is still actively debating intermediary liability after the collapse of PL 2630/2020 (the “Fake News Bill”), short-circuits democratic deliberation on a question that involves trade-offs reasonable people disagree about. A statute can be amended, sunsetted, paired with due-process safeguards, or scoped to specific harms. A constitutional reinterpretation is far harder to dial back if it produces unintended chilling effects.

3. Smaller platforms pay the highest price

Meta, Google, X and TikTok have the legal teams, the Portuguese-language reviewers, and the appeals infrastructure to absorb a notice-and-takedown duty. A Brazilian startup, a Mastodon instance hosted in Curitiba, a community forum for a favela cultural association — they do not. Liability rules that look reasonable when imagined against the “Big Four” become an entry barrier when applied to the long tail. Brazil's vibrant domestic tech sector — fintechs, creator platforms, federated networks — depends on the predictability Article 19 once supplied.

What Congress should do now

The healthy response is not to relitigate the STF's decision but to legislate around it with the precision the Court could not. A workable framework would define each extrajudicial category in statute, rather than leaving "manifestly illegal" to case-by-case judicial guesswork; attach due-process safeguards to every extrajudicial removal, including reasoned notices, counter-notice, and a path to judicial review; scale obligations to platform size and capacity so the long tail is not priced out of the market; and build in sunset and review clauses so Congress can correct course if over-removal materialises.

The global signal

Brazil matters far beyond its borders. Marco Civil was studied and partly emulated from Mexico to South Africa. The STF's retreat from its protective core will be cited by every government in the Global South that wants to compel faster takedowns without the inconvenience of a court. That is the deeper risk: not what the ruling does inside Brazil, but the example it sets for jurisdictions with weaker judiciaries and stronger appetites for political content control. A proportionate, codified, narrowly-scoped legislative response from Brasília would do more than fix a domestic problem. It would offer the rest of the region a template that protects users from genuinely dangerous content without dismantling the architecture that made the open internet possible in the first place.

Sources & Citations

  1. Marco Civil da Internet (Law 12.965/2014) — Planalto
  2. Supremo Tribunal Federal — institutional portal
  3. EU Digital Services Act — official text