When the Romanian Constitutional Court annulled the country's presidential election on December 6, 2024, it did more than void the surprise first-round win of fringe candidate Călin Georgescu. It set in motion a regulatory test that is quietly rewriting the rulebook of European intermediary liability. Eleven days later, the European Commission opened formal proceedings against TikTok under the Digital Services Act (DSA), citing risks to civic discourse and electoral processes. Nearly eighteen months on, that probe has become the most consequential application of the DSA to date — and a sign that Europe's version of Section 230 is, for all practical purposes, gone.
From safe harbor to systemic duty
For two decades, Article 14 of the 2000 e-Commerce Directive functioned as Europe's analog to the U.S. Section 230: an intermediary that lacked actual knowledge of illegal content and acted expeditiously to remove it once notified was generally shielded from liability. It was imperfect, but it gave hosting services breathing room to scale without becoming surrogate editors.
The DSA preserves the conditional liability shield itself: Articles 4 through 6 carry the old framework forward largely intact. But the regulation layers on a parallel regime of systemic duties for Very Large Online Platforms (VLOPs) that operate independently of any specific piece of content. Recommender-system risk assessments, mitigation plans, crisis protocols, transparency reporting on coordinated inauthentic behavior, data access for vetted researchers: these obligations apply whether or not a single illegal post is ever flagged. Non-compliance can trigger fines of up to 6% of global annual turnover.
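To make the shape of that regime concrete, here is a minimal sketch of the duties as a content-independent checklist, plus the arithmetic behind the penalty ceiling. The field names and the EUR 100 billion turnover figure are invented for illustration; only the 6% cap comes from the regulation.

```python
# Illustrative sketch, not an official schema: the VLOP systemic duties
# described above, modeled as a checklist that applies regardless of any
# individual post. All field names here are invented for this example.
from dataclasses import dataclass

@dataclass
class VlopSystemicDuties:
    recommender_risk_assessment: bool = False
    mitigation_plan: bool = False
    crisis_protocol: bool = False
    cib_transparency_report: bool = False  # coordinated inauthentic behavior
    researcher_data_access: bool = False

    def outstanding(self) -> list[str]:
        """Duties not yet satisfied; note no reference to any content item."""
        return [name for name, done in vars(self).items() if not done]

FINE_CAP_RATE = 0.06  # the DSA's ceiling: 6% of global annual turnover

duties = VlopSystemicDuties(recommender_risk_assessment=True)
print(duties.outstanding())  # four duties still open
# Exposure ceiling for a hypothetical platform with EUR 100B turnover:
print(f"up to EUR {FINE_CAP_RATE * 100e9 / 1e9:.0f}B")  # up to EUR 6B
```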
The TikTok-Romania case is the first time the Commission has used these tools against a platform on the explicit grounds that its design choices — not its content moderation failures — may have distorted a national election.
What the Commission is actually investigating
According to the Commission's December 17, 2024 announcement, the proceedings focus on three areas: TikTok's risk assessment and mitigation of recommender-system manipulation, its policies on political advertising and paid-for political content, and its handling of coordinated inauthentic behavior in the run-up to the November 24 first-round vote. Romanian and EU officials had documented an unusually coordinated TikTok push behind Georgescu in the final two weeks of the campaign, with thousands of accounts amplifying his content while the platform's political-ads ban created an enforcement blind spot.
Notably, the Commission is not alleging that TikTok hosted illegal content it failed to take down. It is asking whether the platform's architecture — the recommendation algorithm, the labeling regime, the influencer-marketing detection systems — adequately addressed foreseeable risks to electoral integrity. That is a fundamentally different theory of liability.
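To see what "detecting coordinated inauthentic behavior" means mechanically, consider a deliberately toy heuristic: flag content that many distinct accounts push within a narrow time window. Everything below (the thresholds, field names, and logic) is invented for illustration and says nothing about TikTok's actual systems, which are not public.

```python
# Toy burst-coordination heuristic, invented for illustration. Real platform
# detection systems are far more sophisticated and are not publicly
# documented; the thresholds here are arbitrary.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    account_id: str
    content_hash: str  # hash of normalized text or media
    timestamp: float   # seconds since epoch

def flag_coordinated_bursts(posts: list[Post],
                            window_seconds: float = 3600.0,
                            min_accounts: int = 50) -> set[str]:
    """Flag content amplified by many distinct accounts in one time window."""
    by_content: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_content[p.content_hash].append(p)

    flagged: set[str] = set()
    for content_hash, group in by_content.items():
        group.sort(key=lambda p: p.timestamp)
        in_window: dict[str, int] = defaultdict(int)  # account -> posts in window
        left = 0
        for post in group:
            in_window[post.account_id] += 1
            # Slide the left edge until the window spans at most window_seconds.
            while post.timestamp - group[left].timestamp > window_seconds:
                in_window[group[left].account_id] -= 1
                if in_window[group[left].account_id] == 0:
                    del in_window[group[left].account_id]
                left += 1
            if len(in_window) >= min_accounts:
                flagged.add(content_hash)
                break
    return flagged
```

The catch, taken up in the attribution bullet below, is that a genuinely viral moment trips exactly the same threshold as a bot network: the heuristic sees coordination-shaped behavior, not intent.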
Why this matters beyond TikTok
This shift carries real benefits. Electoral interference is a genuine harm, and a pure notice-and-takedown regime is a poor fit for influence operations that exploit the recommender system rather than violate any specific content rule. Asking VLOPs to demonstrate due diligence on systemic risks is, in principle, a reasonable evolution.
But it also carries real costs that deserve clear-eyed assessment:
- Vague standards invite over-removal. If platforms face six-percent-of-turnover fines for failing to mitigate ill-defined "systemic risks" to civic discourse, the rational response is to throttle borderline political content during every election in any of the EU's 27 member states. That is bad for speech, bad for challenger candidates, and bad for the democratic resilience the DSA is meant to protect.
- The asymmetry favors incumbents. Compliance with VLOP duties — risk assessments audited by approved firms, data-access programs, dedicated compliance officers — is expensive. A regime that smaller competitors cannot afford to enter entrenches the very dominance European policy elsewhere claims to oppose.
- Attribution is hard. Coordinated inauthentic behavior in Romania has been variously attributed to Russian operators, domestic political networks, and ordinary organic enthusiasm misread as coordination. Penalizing a platform for failing to detect activity that experienced intelligence services struggle to attribute turns enforcement into a regulatory casino.
A proportionate path forward
The DSA is not going away, and it should not. But the Commission's interpretation in the TikTok case will set a precedent that propagates across every VLOP designated under the regulation. A few principles would keep enforcement proportionate without abandoning the legitimate goals of the framework:
- Process over outcome. Platforms should be assessed on whether they conducted reasonable, documented risk assessments and acted on the findings, not on whether bad actors nonetheless succeeded (a sketch of what such a record might look like follows this list).
- Transparency obligations must come with safe harbors. A platform that opens its recommender data to vetted researchers and good-faith mitigators should gain liability protection, not new exposure.
- Thresholds for systemic-risk findings need to be high. Election-period anomalies are common; treating every one as a Commission-worthy enforcement matter will neither deter foreign interference nor preserve the speech environment.
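What would such a record look like? A hypothetical sketch, invented for this piece rather than drawn from the DSA text or any platform's practice:

```python
# Hypothetical audit-trail schema for a process-based standard. The fields
# and the due-diligence test are invented for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ElectionRiskAssessment:
    election: str                    # which vote the assessment covers
    assessed_on: date
    risks_identified: list[str]      # documented findings
    mitigations_deployed: list[str]  # actions actually taken in response

    def demonstrates_due_diligence(self) -> bool:
        """The process test: findings were documented and acted on,
        regardless of whether interference ultimately succeeded."""
        return bool(self.risks_identified) and bool(self.mitigations_deployed)
```

The point of the design is what the test ignores: whether an influence operation succeeded anyway feeds the next assessment, not the liability finding.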
Europe is right to ask more of dominant platforms than the e-Commerce Directive once did. But the lesson of Bucharest is not that conditional immunity must die. It is that intermediary liability frameworks built for static hosting do not map onto algorithmic curation — and that the replacement must be precise, predictable, and respectful of the speech interests it claims to defend. The TikTok proceedings will set that template. Getting it right matters far beyond Romania.