On 14 July 2025, the European Commission published its long-awaited Guidelines on the Protection of Minors Online under Article 28 of the Digital Services Act, accompanied by a prototype EU age-verification app intended to give member states a privacy-preserving way to check that users are old enough to access certain services. The package is the most concrete operational guidance the Commission has issued on child safety since the DSA entered into force, and it now sits behind a widening set of formal proceedings against Meta, TikTok and a cluster of large adult-content platforms.
For policymakers in Brussels, this is a coming-of-age moment for the DSA. For platforms, investors and the wider open-internet community, it is also a moment to ask whether the guidelines strike the right balance — and whether the European model being built here will lift global standards or simply raise the drawbridge around the EU's digital single market.
What the guidelines actually require
Article 28 of the DSA imposes an open-textured obligation on any online platform accessible to minors to take "appropriate and proportionate measures to ensure a high level of privacy, safety and security of minors" on its service. The new guidelines translate that principle into concrete expectations that the Commission and national Digital Services Coordinators will use as a compliance yardstick. Among the most consequential:
- Default-private accounts for users known or assumed to be minors, with restricted discoverability and limits on unsolicited messages from adults.
- Curbs on "addictive" design features — including default-off autoplay, gentler push notifications, no streaks or manipulative engagement loops, and restrictions on opaque recommender systems targeting children.
- Risk-based age assurance, with the Commission's prototype white-label app positioned as a baseline tool for member states to integrate with national eID schemes ahead of the European Digital Identity Wallet.
- Tighter controls on commercial practices, including a presumption against profiling-based advertising to minors (already prohibited outright under DSA Article 28(2)) and against loot-box-style in-app mechanics.
These expectations are technically non-binding. In practice, they will be the rulebook by which the Commission judges compliance. Ongoing proceedings against Meta and TikTok under DSA Articles 28, 34 and 35, and the investigations opened in 2024 against the major adult-content platforms designated as Very Large Online Platforms — Pornhub, Stripchat and XVideos — will almost certainly be assessed against this template.
The good: a serious attempt at privacy-preserving age checks
The most encouraging part of the package is the age-verification prototype itself. Built around a zero-knowledge-style architecture and intended to plug into the forthcoming EU Digital Identity Wallet, the app lets a user prove they are over a threshold age without revealing their identity, date of birth or document number to the requesting platform. That is a meaningful upgrade on the status quo, where age-gating typically forces users to upload ID documents or selfies to third-party vendors with patchy security records.
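To make that privacy claim concrete, here is a minimal sketch of the underlying idea in Python. A trusted issuer signs only the claim the platform needs, an over-18 flag plus an anti-replay nonce, and the platform checks the signature without ever seeing a name, birth date or document number. This is a deliberate simplification of a selective-disclosure credential flow, not the prototype's actual protocol; the token format and all names here are illustrative.

```python
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a trusted issuer such as a national eID scheme (illustrative;
# the real prototype is meant to plug into EUDI Wallet credentials).
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

SIG_LEN = 64  # Ed25519 signatures are always exactly 64 bytes


def issue_age_token(over_18: bool) -> bytes:
    # The issuer attests to the minimal claim only: no identity attributes.
    claim = {"age_over_18": over_18, "nonce": os.urandom(16).hex()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return payload + issuer_key.sign(payload)


def platform_accepts(token: bytes) -> bool:
    # The platform verifies the issuer's signature; it learns one boolean,
    # not who the user is.
    payload, signature = token[:-SIG_LEN], token[-SIG_LEN:]
    try:
        issuer_pub.verify(signature, payload)
    except InvalidSignature:
        return False
    return json.loads(payload)["age_over_18"] is True


assert platform_accepts(issue_age_token(True))
assert not platform_accepts(issue_age_token(False))
```

A production system would add standard credential formats, revocation and unlinkable presentations, but the core privacy property the Commission's prototype is chasing is the same: the relying platform verifies a signature, not a person.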
If member states adopt it cleanly, the EU could end up with something genuinely better than the UK's Online Safety Act regime, where the Ofcom-supervised rollout has leaned heavily on commercial face-estimation tools, or the patchwork of US state laws that has triggered a string of First Amendment challenges. Done well, this is a model worth exporting.
The risks: scope creep, fragmentation and a chilled open web
The concerns lie not in the prototype itself but in everything built around it. Three stand out.
First, scope creep. Article 28 covers "any online platform accessible to minors" — a phrase the Commission interprets broadly. Read literally, the guidelines apply not only to TikTok and Instagram but to any general-purpose service that does not actively exclude minors, including small forums, hobbyist communities, open-source social platforms and federated services. The DSA's tiered architecture was meant to spare smaller actors the heaviest burdens; the guidelines risk re-importing them through the back door.
Second, design micromanagement. Some of the expectations on "addictive design" — default-off autoplay, prescriptions on notification cadence, presumptions against personalised feeds for minors — drift from outcome-based regulation toward product design by committee. Recent academic work, including the EU-funded research synthesised in the Commission's own 2024 Joint Research Centre report on minors and digital technologies, finds the evidence on causal harm from specific design features more mixed than the political debate suggests. Regulating to the worst case is how good products get worse without measurable safety gains.
Third, fragmentation. If the EU model diverges sharply from the UK Online Safety Act, Australia's social-media age limits and emerging US state regimes, global platforms will increasingly geo-fence functionality or simply withdraw features in Europe — as several have already done with generative AI products. That is bad for European consumers, worse for European start-ups trying to scale globally, and corrosive to the open internet.
A proportionate path forward
None of this is an argument against child safety online, which is a real and serious policy goal. It is an argument for a lighter, more iterative touch. Three principles would help:
- Make the age-verification app truly optional and interoperable, so platforms can comply through any method that meets a published privacy and accuracy benchmark, not only the EU-blessed one (a sketch of such a benchmark test follows this list).
- Tier the design obligations honestly, focusing the prescriptive rules on the largest services with demonstrated risk profiles and leaving smaller platforms to comply through codes of conduct.
- Build in a sunset and review clause, with measurable outcomes — not feature checklists — as the test of success by 2028.
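On the first of those principles, the benchmark test could be mechanically simple, as the sketch below suggests. The thresholds and field names are hypothetical, since no such benchmark has been published; the point is only that any method clearing the bar would count, EU app or not.

```python
from dataclasses import dataclass


@dataclass
class AgeAssuranceMethod:
    """One way of checking age: the EU app, a wallet credential, an estimator."""
    name: str
    false_pass_rate: float  # share of under-age users incorrectly passed
    retains_identity_data: bool  # does the method store who the user is?


# Hypothetical benchmark value; an actual standard would be published by
# the Commission or a standards body, not hard-coded like this.
MAX_FALSE_PASS_RATE = 0.03


def meets_benchmark(method: AgeAssuranceMethod) -> bool:
    # Any method clearing the published bar counts as compliant,
    # regardless of who built it.
    return (
        method.false_pass_rate <= MAX_FALSE_PASS_RATE
        and not method.retains_identity_data
    )


assert meets_benchmark(
    AgeAssuranceMethod("eu-prototype-app", 0.01, retains_identity_data=False)
)
assert not meets_benchmark(
    AgeAssuranceMethod("id-upload-vendor", 0.01, retains_identity_data=True)
)
```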
The DSA was sold as a framework that would prove Europe could regulate the digital economy without breaking it. The Article 28 guidelines are the first real test of that promise. Get them right, and the EU has a globally exportable child-safety model. Get them wrong, and Europe will have built the most elaborate walled garden on the open internet — in the name of the children growing up inside it.