For trust-and-safety leads sitting in São Paulo, Singapore is suddenly closer than the map suggests. The compliance teams at TikTok, Meta, Google and X are quietly drawing organisational diagrams that map two parallel obligations: the directives that can land in their inbox from Singapore's Infocomm Media Development Authority (IMDA), and the notice-driven, court-mediated duties that Brazil is rapidly assembling in the wake of the Supreme Federal Tribunal's (STF) recent rewrite of Article 19 of the Marco Civil da Internet. The two regimes are converging on the same outcome — faster takedowns of the worst online content — but through structurally very different machinery. That gap is becoming a live conflict-of-laws problem.
The Singapore baseline
Singapore's Code of Practice for Online Safety, in force since July 2023 under amendments to the Broadcasting Act, designates a handful of large platforms — TikTok (operated regionally by ByteDance Pte Ltd, headquartered in Singapore), Meta's Facebook and Instagram, Google's YouTube, X, and local forum HardwareZone — as "regulated online communication services." These platforms must publish community standards, provide user reporting and appeal tools, and produce annual safety reports to IMDA. So far, so familiar to anyone who has read the EU's Digital Services Act.
What sets Singapore apart is the Online Criminal Harms Act (OCHA), passed in 2023 and rolled out in stages from February 2024. OCHA gives government officers administrative power to issue disabling, stop communication, and account restriction directives (styled "directions" in the Act) where there is reasonable suspicion of specified criminal harm: scams, sexual content involving minors, terrorism-related material, and a defined list of other harms. Directives can be issued in hours, not weeks. Non-compliance is a criminal offence carrying fines and, for repeat offenders, the prospect of access-blocking orders against the platform itself.
Whatever one thinks of the speed, it is at least a defined speed, with a published threshold of harm and an appeal route to a reviewing tribunal.
Brazil's emerging stack
Brazil is travelling in the same direction without quite the same map. Two developments matter:
- The STF's reinterpretation of Article 19. The Court has effectively moved Brazil away from a pure judicial-order-only liability regime toward a notice-based system of "heightened duties of care" for serious categories of unlawful content. Platforms can now be liable if they fail to act after sufficiently clear extrajudicial notification of content involving terrorism, incitement to violence, child sexual abuse material, or attacks on democratic institutions.
- PL 2630 — the so-called "Fake News Bill." Long stalled in Congress, the bill in its most recent iterations proposes a dedicated online-safety regulator with the power to issue binding directives, request data, and impose substantial fines. The architecture is structurally closer to IMDA than to Brussels' DSA Board model.
If both vectors land together, Brazilian platforms could be answering simultaneously to courts applying the STF's Article 19 framework, an administrative regulator under PL 2630, and the executive's existing emergency powers under the Marco Civil da Internet and decree-level cybersecurity instruments. That is a recipe for unpredictable enforcement.
Why the comparison matters
The Singapore design has one underrated virtue: clarity about who decides what and how fast. A platform receiving an IMDA directive knows the legal basis, the time window, and the appeal pathway. The Brazilian alternative — a court-led regime supplemented by an administrative regulator-in-waiting — risks giving platforms three different stopwatches running at once, with different harm thresholds and different review mechanisms.
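The "stopwatches" problem can be made concrete. A purely illustrative sketch, in which every regime name, legal basis, and response window below is an invented assumption rather than a statutory figure, of how a trust-and-safety team might model parallel deadlines attaching to the same item of content:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Directive:
    """One regime's demand concerning a single piece of content.

    All field values used below are hypothetical; real directives
    carry their own statutory bases, categories, and windows.
    """
    regime: str          # e.g. "IMDA/OCHA", "STF Art. 19 notice"
    legal_basis: str
    received: datetime
    window: timedelta    # assumed response window, not a statutory one
    appeal_route: str

    @property
    def deadline(self) -> datetime:
        return self.received + self.window

def binding_deadline(directives: list[Directive]) -> Directive:
    """The clock the team actually races: the earliest deadline wins."""
    return min(directives, key=lambda d: d.deadline)

# Hypothetical same-day scenario for one item of content.
now = datetime(2025, 7, 1, 9, 0)
stack = [
    Directive("IMDA/OCHA", "disabling directive", now,
              timedelta(hours=6), "Reviewing Tribunal"),
    Directive("STF Art. 19 notice", "extrajudicial notification", now,
              timedelta(hours=24), "judicial review"),
    Directive("PL 2630 regulator", "administrative directive (draft)", now,
              timedelta(hours=48), "unspecified"),
]
fastest = binding_deadline(stack)
print(fastest.regime, fastest.deadline)
```

The design point, not the numbers, is what matters: whichever regime sets the shortest clock becomes the de facto global standard for that content, regardless of which regime offers the strongest review.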
From a pro-innovation standpoint, the lesson is not that Brazil should copy Singapore. The Singapore model is comfortable with administrative discretion in a way that sits uneasily with Brazil's robust constitutional speech protections under Article 5 of the 1988 Constitution and the inter-American human rights framework. Importing IMDA-style directive powers wholesale would almost certainly fail an STF proportionality test, and rightly so.
But the procedural rigour of the Singapore approach is worth borrowing. Specifically:
- Tightly defined harm categories. OCHA enumerates the criminal harms it targets. Brazilian "disinformation" framings, by contrast, remain dangerously elastic — and elastic categories have a way of swallowing political speech.
- Time-bound directives with a clock that everyone can see. If Brazil is going to expect rapid action on the most serious content, the threshold for what counts as "rapid" should be in primary legislation, not improvised through emergency injunctions.
- An independent appeal route. Singapore's reviewing tribunal is imperfect, but at least it exists on the face of the statute. PL 2630's drafts have been notably thinner on due-process scaffolding.
The forum-shopping problem
For platforms with regional hubs — TikTok's APAC operations in Singapore, Meta's Latin America footprint via São Paulo — the divergence creates real operating costs. A piece of content might attract a same-day IMDA disabling directive in Singapore while a Brazilian court is still scheduling a hearing on the same material. Conversely, the STF's notice-based duty might compel removal of content that Singapore would treat as protected commentary. Trust-and-safety teams will, inevitably, lean toward whichever regime moves fastest — and that is rarely the regime with the most procedural protection.
What good looks like
Brazilian legislators finalising PL 2630 have a narrow window to learn from Singapore without copying it. Keep the regulator narrow: a small set of clearly defined illegal-content categories, statutory time limits, mandatory transparency reporting, and an independent appeal body that is not the executive. Resist the temptation to fold loosely defined "disinformation" into administrative directive powers. And — critically — preserve the STF's role as the ultimate guardrail on speech, rather than letting an administrative regulator displace judicial review.
Singapore has shown that fast, defined, administrative content directives can coexist with a rules-based platform ecosystem. The question for Brazil is whether it can adapt the procedural discipline without inheriting the deference to executive discretion. Getting that balance right is not just good for platforms — it is the difference between a regulator that protects users and one that becomes a chokepoint on legitimate speech.