In early 2025, the European Commission and the European Board for Digital Services formally converted the long-running Code of Practice on Disinformation — a voluntary instrument that platforms had been signing since 2018 — into a Code of Conduct under the Digital Services Act (DSA). On paper it is a technical move. In practice it is one of the most consequential shifts in European speech regulation in a decade: a soft-law arrangement that platforms could exit at will is now wired directly into the DSA's systemic-risk regime, where non-compliance can trigger fines of up to 6% of global annual turnover.
The shift coincides with the Commission's ongoing DSA proceedings against X, opened in December 2023 over alleged failings around illegal content, dark patterns, advertiser transparency, and risks tied to electoral integrity. With the Code now a benchmark for what 'reasonable, proportionate and effective' mitigation of disinformation looks like under Article 35 of the DSA, regulators have a structured yardstick they previously lacked. That is good for legal certainty. It is also, for anyone who cares about free expression online, a moment to be honest about what this framework can and cannot legitimately do.
From handshake to hammer
The 2022 Strengthened Code committed signatories — Meta, Google, Microsoft, TikTok, and others — to demonetise disinformation, label political ads, empower fact-checkers, and submit transparency reports. X withdrew from the Code in May 2023. Under the new architecture, that kind of exit is no longer costless: adherence to a recognised Code of Conduct is now one way platforms can demonstrate DSA compliance, and refusal to participate becomes a factor regulators weigh when assessing systemic risk.
That is a meaningful change. Voluntary codes were criticised for being toothless; the DSA was criticised for being vague about how to mitigate 'societal' harms. Bolting them together solves both problems for the regulator. It solves neither problem for the user.
The definitional problem hasn't gone anywhere
The DSA, to its credit, does not require platforms to remove lawful-but-disputed content. 'Misinformation' as such is not illegal under EU law: the DSA's definition of illegal content (Recital 12) tracks what existing EU or national law already prohibits, and Article 14(4) obliges platforms to enforce even their own terms with due regard to freedom of expression. The Commission has repeatedly insisted the DSA is not a takedown regime for political speech. The Code's obligations are framed around process: risk assessments, ad transparency, researcher data access, and demonetisation of repeat offenders.
But process obligations cast a long shadow. When a regulator can fine a platform billions of euros for inadequate 'mitigation' of disinformation risk, the rational response is to over-mitigate. Civil society groups including the Electronic Frontier Foundation and Access Now have warned for years that systemic-risk language, however carefully drafted, creates structural pressure toward collateral over-removal — particularly during elections, when the political cost of under-moderating is far higher than the cost of silencing a legitimate dissenter.
The X test case
The Commission's preliminary findings against X in mid-2024 focused on verified-account 'blue checks' as deceptive design, opacity in the ad repository, and obstacles to researcher access — areas where the DSA's case is on firm ground because the obligations are concrete and procedural. The harder question is what happens if the proceedings extend, as some Member States have urged, into substantive judgments about how X handled specific election-period narratives.
That is the line the EU has so far been careful not to cross. It should stay careful. A regulator deciding, after the fact, that a platform's algorithmic ranking of a contested political claim amounted to a systemic risk failure is a regulator deciding what political speech looks like at scale. No democratic mandate authorises that — and the European Court of Human Rights has consistently held, in cases from Handyside onward, that speech which 'offends, shocks or disturbs' is precisely the speech Article 10 protects.
What proportionate enforcement looks like
There is a version of this regime that strengthens the information ecosystem without sliding into speech control. It looks like:
- Hard obligations on transparency, soft obligations on content. Force ad libraries, recommender-system disclosure, and researcher access. Do not force editorial outcomes. (A sketch of what a purely procedural ad-library record could look like follows this list.)
- Crisis protocols with sunset clauses. The DSA's Article 36 crisis mechanism should never become a standing channel for content guidance.
- Independent appeals with teeth. Out-of-court dispute settlement bodies under Article 21 currently issue decisions that are not even binding on platforms; unless their rulings can reverse wrongful removals in practice, they are rubber stamps.
- Smaller-platform carve-outs. Compliance costs that VLOPs can absorb will crush the federated and open-source alternatives the EU claims to want.
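To make the hard/soft line concrete, here is a minimal sketch of the kind of machine-readable record a DSA-grade ad library could expose. The field names are hypothetical, not any platform's actual schema or the Commission's transparency-database format; the point is that every field is verifiable process metadata, and none of it asks anyone to rule on whether the ad's claims are true.

```typescript
// Hypothetical shape of one ad-library entry (illustrative only).
// DSA Article 39 requires VLOPs to publish who paid for an ad, on
// whose behalf it ran, its targeting parameters, and its reach:
// all procedural facts, none of them truth judgments.
interface AdLibraryEntry {
  adId: string;                     // stable identifier for the creative
  advertiser: {
    legalName: string;              // who paid for the ad
    beneficiary: string;            // on whose behalf it was presented
  };
  selfDeclaredPolitical: boolean;   // declared/labelled status, not a verdict
  firstShown: string;               // ISO 8601 date
  lastShown: string;                // ISO 8601 date
  targeting: {
    parameters: string[];           // e.g. ["age:18-34", "region:DE"]
    exclusions: string[];           // groups explicitly excluded
  };
  reach: {
    totalImpressions: number;
    perMemberState: Record<string, number>; // e.g. { DE: 120000 }
  };
  moderation: {
    removed: boolean;               // whether the platform pulled the ad
    statementOfReasonsId?: string;  // Article 17 reference, if it did
  };
}
```

A regulator can audit every one of those fields against platform logs without ever deciding what counts as disinformation; the moment a schema like this needs a hypothetical `truthfulness` field, the regime has crossed from process into content.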
The bigger picture
Europe's misinformation framework will be exported. The UK's Online Safety Act, Brazil's Marco Civil reform debates, and India's IT Rules all borrow from the DSA's vocabulary. If Brussels normalises the idea that 'disinformation mitigation' is a legitimate object of regulatory enforcement against private platforms, governments with weaker rule-of-law guarantees will normalise the same idea — and the carefully drafted procedural safeguards will not survive the translation.
Integrating the Code into the DSA was probably inevitable; the legal vacuum was untenable. But the test of this framework is not whether it can punish X. It is whether, five years from now, a journalist publishing an unpopular minority view on a VLOP can still be confident that no algorithm has been tuned to bury her in the name of European 'information integrity'. That is a high bar, and Brussels should hold itself to it.