When Mark Zuckerberg announced on January 7, 2025, that Meta would wind down its third-party fact-checking program in the United States and replace it with a Community Notes–style system inspired by X, he framed the change as a correction — too many takedowns, too much friction, too little trust. Sixteen months later, the experiment has become the central case study in a much bigger question: can a speech-permissive moderation model survive contact with the European Union's Digital Services Act (DSA), Brazil's evolving intermediary liability doctrine, and a patchwork of national rules from Delhi to Canberra?
The short answer is: probably yes, but only if regulators resist the temptation to mistake a particular moderation technique for a compliance outcome. That distinction is where the next two years of global content policy will be fought.
What Meta Actually Changed
Meta's January 2025 policy update did three things in the US: it ended partnerships with independent fact-checkers (IFCN-accredited organisations such as PolitiFact, Lead Stories, and Reuters Fact Check), rolled out a crowdsourced "Community Notes" feature modelled on the system X inherited from Twitter's Birdwatch, and loosened several Hateful Conduct policies — most notably around speech on immigration and gender. Meta also said it would relocate trust-and-safety teams from California to Texas and reduce automated enforcement on lower-severity policy violations.
Crucially, the changes applied only to Facebook, Instagram, and Threads in the United States. Meta has repeatedly stated that obligations under the EU's DSA and other regional regimes remain in force outside the US — but it has not committed to keeping the legacy fact-checking program running indefinitely in those markets. That ambiguity is what regulators have been probing.
Europe: The DSA Stress Test
The European Commission moved quickly. Within days of the announcement, Commissioner Virkkunen and Commission services reminded Meta that Facebook and Instagram, designated as Very Large Online Platforms (VLOPs) under the DSA, must conduct annual systemic risk assessments covering disinformation, civic discourse, and electoral integrity (Articles 34–35), and must adopt proportionate mitigation measures audited by independent third parties.
Importantly, the DSA does not mandate third-party fact-checking. It mandates risk management. A Community Notes–style system can, in principle, satisfy that obligation — but Meta would need to demonstrate, with data, that it reduces the spread of illegal content and verifiably manipulated information at least as effectively as the program it replaces. The Commission has already opened DSA proceedings against X partly on this question, so Meta is not entering uncharted territory.
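What would "demonstrate, with data" look like in practice? One plausible outcome metric is the relative drop in reshares after a post is labelled, compared across the two regimes. The sketch below is a hypothetical illustration of that comparison; the column names, the figures, and the `reshare_reduction` helper are invented for this example and are not Meta's actual telemetry or methodology.

```python
# Hypothetical sketch of an outcome-based comparison between two
# moderation regimes, in the spirit of a DSA Articles 34-35 risk
# assessment. All data and column names are illustrative.
import pandas as pd

def reshare_reduction(df: pd.DataFrame) -> float:
    """Relative drop in reshares after a post is labelled or noted."""
    before = df["reshares_pre_label"].sum()
    after = df["reshares_post_label"].sum()
    return 1 - after / before if before else 0.0

# One row per flagged post, under each regime (toy numbers).
fact_check = pd.DataFrame({
    "reshares_pre_label": [900, 400, 1200],
    "reshares_post_label": [310, 150, 420],
})
community_notes = pd.DataFrame({
    "reshares_pre_label": [850, 500, 1100],
    "reshares_post_label": [280, 190, 400],
})

print(f"fact-checking:   {reshare_reduction(fact_check):.1%}")
print(f"community notes: {reshare_reduction(community_notes):.1%}")
```

A real assessment would also have to weight latency (a note that lands two days late prevents little) and coverage (the share of viral misleading posts that receive any label at all) — the two dimensions where crowdsourced systems have historically lagged.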
This is the proportionate path. Brussels should hold Meta to outcome-based standards, not prescribe a single moderation methodology.
Brazil: The Sharper Edge
Brazil has been the more confrontational jurisdiction. The Attorney General's Office (AGU) wrote to Meta in early 2025 demanding clarification on whether the US policy changes would extend to Brazilian users — a country where Meta platforms reach an estimated 140 million people and where elections and public-health discourse have been repeatedly stress-tested by viral misinformation.
Brazil's Supreme Federal Tribunal (STF) had separately been reinterpreting Article 19 of the Marco Civil da Internet, signalling that platforms can face liability for certain categories of clearly unlawful content even absent a specific court order. Combined with proposed Fake News Bill provisions (PL 2630) that have circulated in Congress since 2020, the Brazilian environment is structurally less tolerant of a hands-off moderation posture than the US.
Meta's response — that DSA-style and Marco Civil obligations are honoured locally and that Community Notes is being piloted rather than substituted wholesale — has not fully satisfied AGU officials, but it has so far avoided enforcement action.
The Policy Principle Worth Defending
The temptation in moments like this is to legislate the moderation method. That would be a mistake.
- Crowdsourced annotation is not inherently inferior to professional fact-checking. Peer-reviewed research on X's Community Notes (including studies in 2023–2024 by MIT and the University of Luxembourg) found that notes which reach consensus tend to be accurate and reduce sharing of misleading posts. The bottleneck is latency and coverage, not accuracy. (A sketch of the consensus mechanism appears after this list.)
- Professional fact-checking has real costs. Centralised arbiters of "truth" carry their own legitimacy problems, particularly on contested scientific or political questions where consensus is still forming.
- Outcome-based regulation works. The DSA's risk-assessment framework, if enforced rigorously, will tell us whether Meta's new approach reduces measurable harm — without forcing every platform into the same operational template.
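The consensus mechanism behind the first bullet is worth seeing concretely. X has open-sourced its Community Notes scorer, which at its core fits a matrix factorization over the user-note rating matrix: each rating is modelled as a global mean plus a user intercept, a note intercept, and a low-rank "viewpoint" term, and a note is surfaced only when its intercept stays high after the viewpoint term has absorbed same-side agreement. The sketch below is a heavily simplified illustration of that bridging idea; the hyperparameters, the toy data, and the `score_notes` helper are invented here, and the production system adds many more safeguards.

```python
# Simplified sketch of bridging-based consensus scoring, in the
# spirit of X's open-sourced Community Notes algorithm. Each rating
# is modelled as mu + user_intercept + note_intercept + user_f @ note_f.
# A note whose intercept stays high once the factor term has absorbed
# same-viewpoint agreement is treated as having cross-viewpoint consensus.
import numpy as np

def score_notes(ratings, n_users, n_notes, dim=1, lr=0.05, reg=0.03, epochs=200):
    """ratings: list of (user_id, note_id, value) with value in {0.0, 1.0}."""
    rng = np.random.default_rng(0)
    mu = 0.0
    bu, bn = np.zeros(n_users), np.zeros(n_notes)          # intercepts
    fu = rng.normal(0, 0.1, (n_users, dim))                # user factors
    fn = rng.normal(0, 0.1, (n_notes, dim))                # note factors
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + bu[u] + bn[n] + fu[u] @ fn[n])
            mu += lr * err
            bu[u] += lr * (err - reg * bu[u])
            bn[n] += lr * (err - reg * bn[n])
            # update both factor vectors from their pre-update values
            fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                            fn[n] + lr * (err * fu[u] - reg * fn[n]))
    return bn  # high intercept ~ rated helpful across viewpoints

# Toy data: note 0 is endorsed only by users 0-1 (one "side");
# note 1 is endorsed by raters on both sides.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 0), (3, 0, 0),
           (0, 1, 1), (1, 1, 1), (2, 1, 1), (3, 1, 1)]
print("note intercepts:", score_notes(ratings, n_users=4, n_notes=2))
```

In the toy data, note 0's support is explained away by the factor term, so its intercept stays low, while note 1's cross-cluster endorsement pushes its intercept up. That bridging property is what the accuracy studies cited above are measuring.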
What to Watch
Meta's first full DSA risk assessment report under the new moderation regime is expected later in 2026 and will be the first hard data point. The European Board for Digital Services will almost certainly scrutinise it. Brazil's STF rulings on Article 19, several of which await final published reasoning, will set the liability floor for any platform operating there. And US courts, particularly in the wake of Moody v. NetChoice, continue to constrain how aggressively American legislatures can mandate moderation outcomes.
The right global posture is not to force Meta back to its old program, nor to celebrate the new one as a free-speech victory. It is to insist that platforms — whichever methodology they choose — demonstrate that their systems reduce measurable harm, respect users' speech rights, and remain auditable. That is what proportionate regulation looks like.