Brussels is running two parallel experiments in algorithmic accountability, and both are starting to bite. The European Commission's formal proceedings under the Digital Services Act (DSA) against X, TikTok and Meta — focused on recommender system transparency, systemic risk assessments and researcher data access — remain open more than a year after they began. Meanwhile, the high-risk system obligations of the EU AI Act are scheduled to begin phased application from August 2026. The collision of these two regimes will define what algorithmic governance looks like in practice, not just on paper.
If Europe gets it right, it will set a global benchmark for transparent, contestable recommender systems without throttling the platforms millions of Europeans rely on. If it gets it wrong, it will burden compliant firms, frustrate independent researchers and entrench the very incumbents the rules were meant to discipline.
What the DSA proceedings actually examine
The Commission's pending cases are not really about specific pieces of content. They probe whether very large online platforms (VLOPs) — those with 45 million or more average monthly active users in the EU — have honoured the architectural duties the DSA imposes on them. That includes Article 27's requirement to disclose the "main parameters" of recommender systems, Article 34's duty to assess systemic risks (such as effects on civic discourse, minors and electoral integrity), and Article 40's obligation to give vetted researchers access to platform data.
These are reasonable demands on paper. Platforms with continent-scale reach should be able to explain, in non-trivial terms, why a given user sees a given post. They should be able to demonstrate that they have looked for foreseeable harms before launching new features. And independent researchers should be able to test platform claims rather than take them on faith.
The harder question is what compliance actually looks like. A risk assessment that is genuinely useful to regulators tends to expose trade secrets to competitors. A recommender disclosure that is genuinely informative to users may be impenetrable to anyone who is not already an ML engineer. And data access regimes that work for established researchers can be gamed by bad-faith actors with credentials.
The AI Act overlap problem
Layered on top is the AI Act, whose high-risk obligations begin to apply from August 2026 for a wide range of systems. Recommender systems on VLOPs are not classified as "high-risk" under Annex III, but many adjacent uses — biometric categorisation, employment screening, credit scoring and access to essential services — are. Large platforms will increasingly find themselves subject to overlapping conformity, transparency and post-market monitoring duties from two regulators speaking slightly different languages.
The DSA's lead enforcer is the European Commission. The AI Act will be enforced by a patchwork of national market surveillance authorities coordinated by the new AI Office. Without explicit coordination, firms can expect duplicative documentation requests on similar systems, with different timelines, taxonomies and penalty ceilings — up to 6% of global turnover under the DSA, and up to 7% under the AI Act for the most serious infringements.
Researcher access is good — if it is workable
One of the brightest spots in the DSA is Article 40, which contemplates real, structured access to platform data for accredited researchers. This is overdue. For years, platforms have controlled the empirical record about their own systems, periodically restricting tools — most notably when X sharply curtailed free API access in 2023 — that scholars had relied on.
But the delegated act fleshing out Article 40 has taken longer than many in the research community had hoped, and platforms have raised legitimate questions about user privacy, security and abuse of access. A proportionate path exists: tiered access depending on data sensitivity, robust vetting through the Digital Services Coordinators, and clear liability for misuse. Treating every researcher request as either sacred or suspect helps no one.
Why proportionality matters for innovation
It is fashionable to argue that platforms can absorb any compliance cost. They cannot — at least not without consequences for everyone else. The fixed costs of compliance fall heaviest on mid-sized firms that aspire to scale into the VLOP tier, on European challengers competing with US and Chinese incumbents, and on open-source projects whose maintainers cannot field large legal teams. The DSA already exempts small and micro enterprises from many of its obligations; the AI Act builds in regulatory sandboxes and SME provisions. Those carve-outs have to work in practice, not merely appear in the text.
Three principles should guide the next phase of enforcement:
- Single audit, multiple regulators. A platform that submits to a DSA risk assessment audit should not have to re-document the same recommender system from scratch for the AI Act.
- Outcomes over forms. Transparency reports that nobody reads are worse than narrow disclosures that researchers and journalists can actually interrogate.
- Enforcement, not lawfare. Open proceedings should close on a reasonable timeline. Indefinite uncertainty is its own kind of regulatory cost.
The road to August
The next twelve months will test whether Europe can run two flagship digital regimes in concert. Done well, the DSA-AI Act combination becomes a coherent accountability architecture: platforms explain how their systems work, regulators verify, researchers audit and users have real recourse. Done badly, it becomes a compliance maze whose chief beneficiaries are consultants and large incumbents. The Commission's choices on pending proceedings, and on the AI Office's coordination protocols, will tell us which Europe we are heading for.