On May 6, 2026, the European Commission issued its first-ever formal preliminary findings under Article 73 of the Digital Services Act (DSA), concluding that X (formerly Twitter) has breached its obligations on dark patterns in its verified-account system, on advertising transparency, and on data access for vetted researchers. The findings mark a watershed moment: almost two and a half years after opening proceedings in December 2023, the Commission has formally moved a Very Large Online Platform (VLOP) case from investigation into a quasi-adversarial enforcement phase. Now comes the harder question: not whether the DSA will be enforced, but how.
For a regulation premised on transparency and accountability rather than top-down content rules, this is exactly the kind of test case Brussels needed. It is also the kind that could quietly redefine what intermediary liability looks like in Europe for the next decade.
What the Commission Found
The preliminary findings, summarised in the Commission's press release, focus on three discrete obligations rather than content moderation outcomes:
- Dark patterns (Article 25 DSA): the Commission considers that X's paid "blue checkmark" verification deceives users by signalling an authenticated identity when, in fact, anyone can purchase the badge.
- Advertising transparency (Article 39 DSA): X's ad repository allegedly does not enable the "required supervision and research into emerging risks" — searchability, data completeness, and API reliability are the specific concerns.
- Researcher data access (Article 40 DSA): the Commission says X imposes restrictive contractual terms and prohibitive pricing on data access tools, undermining the DSA's research-access pillar.
Critically, the Commission made no finding that X has failed to act against illegal content, the politically charged terrain that dominates public debate. Article 73 is being used here to enforce process and transparency obligations, which is precisely the kind of proportionate, evidence-driven enforcement the DSA's defenders promised.
A Procedural Milestone, Not a Verdict
It is essential to be precise about what preliminary findings actually are. Under Article 73, they trigger X's rights of defence: the company can access the file, respond in writing, and request an oral hearing before the Commission considers a non-compliance decision. Only after that adversarial phase can the Commission impose fines of up to 6% of total worldwide annual turnover under Article 74, alongside potential periodic penalty payments under Article 76.
This sequencing matters. The DSA was designed as a regime of structured cooperation, not a content tribunal. Issuing findings is not the same as issuing fines, and treating the May 6 announcement as a foregone conclusion risks normalising a presumption-of-guilt posture that is incompatible with EU administrative due process. The Commission itself acknowledged that "X now has the rights of defence."
Why Proportionality Will Be the Real Test
From a pro-innovation standpoint, three issues should shape what comes next:
1. Calibrate remedies to actual harm
Dark-pattern findings about a verification badge are qualitatively different from findings of systemic risk around large-scale illegal content. Any remedy, and any fine, should reflect that difference. A maximum penalty applied to a breach of transparency obligations would distort incentives across the entire VLOP ecosystem and chill experimentation with monetisation models that, however imperfect, are part of how platforms remain financially viable outside the surveillance-advertising paradigm the EU itself has criticised.
2. Don't let researcher-access enforcement become a backdoor liability regime
Article 40 is one of the DSA's most important innovations — opening platform data to vetted researchers is a public good. But the implementing rules, finalised in 2025, are still bedding in. Aggressive enforcement before the technical and contractual norms have stabilised could push platforms toward minimal-compliance APIs that are technically open but practically useless. Brussels and platforms should agree on baseline reference architectures rather than litigate each access dispute.
3. Preserve the firewall between conduct rules and speech rules
The strongest argument for the DSA — and the reason it deserved cautious support over more interventionist alternatives — was that it regulates systems rather than speech. The Commission's decision to anchor its first Article 73 case in dark patterns, ad transparency, and researcher access keeps that firewall intact. Future cases must do the same. Slipping into adjudication of moderation outcomes would re-open the intermediary liability settlement that Article 6 DSA (the safe harbour, inherited from the e-Commerce Directive) was meant to preserve.
The Global Signal
Other jurisdictions are watching closely. India is finalising its Digital India Act; Brazil's PL 2630 debate is reawakening; the UK's Online Safety Act is in its first enforcement cycle under Ofcom. If Europe demonstrates that a transparency-led regime can be enforced firmly but proportionately — with fines calibrated to harm and procedures honouring the rights of defence — the DSA will become an exportable template. If, instead, the X case becomes a maximalist precedent, the message global regulators take home will be very different: that the path of least resistance is content control by another name.
The DSA's credibility hinges on what happens between now and the final decision. Enforcement, yes. Proportionality, always. And a clear-eyed recognition that an open internet still depends on intermediaries that can take risks, evolve their products, and answer to law — but not be governed by it line by line.