Brussels' DSA Test on X: Why Process Beats Content Policing

The EU's enforcement against X tests whether the DSA stays a transparency regime — or drifts into content arbitration that chills speech.

EU DSA Enforcement Against X: By the Numbers

  - 6%: maximum DSA fine cap, as a share of global annual turnover
  - 3: preliminary breach counts (dark patterns, ad transparency, researcher access)
  - Dec 2023: formal proceedings opened, the first under the DSA
  - Open: disinformation strand, still active into 2026

Key Takeaways

Nearly two years after the European Commission opened formal proceedings against X under the Digital Services Act, the case is still the most consequential test of how Brussels intends to wield its new platform rulebook. In preliminary findings issued on July 12, 2024, the Commission took the view that X had breached DSA obligations on three counts: deceptive design around its blue checkmark, an advertising repository that fell short of statutory transparency requirements, and restrictions on researcher access to public data. A parallel investigation strand on the dissemination of illegal content and the effectiveness of X's measures against information manipulation remains open into 2026.

From a pro-innovation, pro-speech vantage point, the case is genuinely two stories. The first — transparency and procedural obligations — is the DSA working as designed. The second — the open-ended disinformation strand — is where the regime risks slipping its moorings.

The good part: transparency is a legitimate ask

The DSA's core innovation was not to deputise the Commission as Europe's content moderator. It was to demand auditable transparency from very large online platforms: published terms, accessible ad libraries, vetted researcher access under Article 40, and structured risk assessments. These are procedural duties. They tell platforms what to disclose, not what to host.

On those grounds, the preliminary findings against X are defensible. A blue checkmark that historically signalled a platform-verified identity was repurposed into a paid status indicator while keeping the same visual form — a textbook case of the kind of dark pattern the DSA's Article 25 was written to capture. An ad repository missing required fields makes it harder for journalists, regulators, and competitors to scrutinise political and commercial advertising. And throttling researcher access undercuts the very mechanism through which the public — not just Brussels — can audit a platform's claims.

None of these are content rulings. They are about whether the platform tells the truth about itself. That is a fair line for a regulator to draw, and one that supports rather than undermines a healthy public sphere.

The risky part: the disinformation strand

The still-open investigation into X's handling of "illegal content" and "information manipulation" is the harder case. Disinformation is not, in the European legal tradition, generally illegal. The Commission has been careful to frame the inquiry around systemic risk assessment and the effectiveness of mitigation, rather than the takedown of specific posts. But systemic-risk language is elastic, and the temptation to translate political dissatisfaction with a platform's editorial choices into regulatory pressure is real.

This is the concern civil liberties groups have flagged consistently. As digital rights organisations including EFF have argued in adjacent debates over youth social media bans and emergency takedown powers, regulation built on contested empirical claims — that a given platform feature "causes" measurable societal harm — tends to produce broad rules with a narrow evidentiary basis. The same caution applies here. A finding that X's recommender system does too little to mitigate "manipulation" would, in practice, push the platform to suppress lawful but disfavoured speech to satisfy a regulator. That is the policy outcome the DSA's drafters publicly disclaimed.

Why the distinction matters now

The X case is precedent-setting because it is first. Whatever doctrine the Commission articulates here — about the line between procedural breach and editorial substitution, about the standard of proof for systemic risk, about the proportionality of fines that can reach up to 6% of global annual turnover — will define how the DSA is enforced against every other Very Large Online Platform, from Meta to TikTok to AliExpress to, eventually, smaller European competitors that grow into scope.

What proportionate enforcement looks like

A clean enforcement narrative would settle the dark-pattern, ad-repository, and researcher-access counts on the merits; impose proportionate remedies tied to specific design fixes; and close the disinformation strand without converting it into a precedent for content-level supervision. A messier outcome — a headline fine bundling all four issues, with the disinformation findings doing most of the rhetorical work — would invite every future commissioner to treat the DSA as a general-purpose lever over platform editorial choices.

The bigger picture

Europe got something important right with the DSA: it built a transparency-first regime in an era when other jurisdictions were reaching for outright bans, age-gating mandates, and criminal liability for executives. That comparative advantage holds only if the Commission resists the gravitational pull of content arbitration. The X case is where that discipline gets tested in public. Brussels' choice — process or content — will shape the European internet for the rest of the decade.

Sources & Citations

  1. European Commission — preliminary findings on X under the DSA (July 12, 2024)
  2. Digital Services Act — official text, Regulation (EU) 2022/2065
  3. EFF — caution on weak-evidence platform regulation
  4. European Commission — formal proceedings opened against X (December 2023)