When Australia's Online Safety Amendment (Social Media Minimum Age) Act took effect in December 2025, it made the country the first democracy to legally bar under-16s from holding accounts on major social platforms. Five months on, the policy is no longer a theoretical experiment. It is a live regulatory regime — and the early signals suggest that the gap between political ambition and technical reality is wider than Canberra hoped.
The law, championed by Prime Minister Anthony Albanese and passed with bipartisan support in late 2024, requires Meta, TikTok, Snapchat, X and other designated services to take “reasonable steps” to prevent under-16s from creating or holding accounts. Enforcement sits with the eSafety Commissioner, with civil penalties of up to AUD 49.5 million for systemic non-compliance.
The Implementation Picture So Far
Platforms began deploying age-assurance systems in late 2025, drawing on the technical options identified in the government-commissioned Age Assurance Technology Trial, whose final report was published by the Department of Infrastructure in mid-2025. Approaches in the wild include:
- Facial age estimation — selfie-based AI models from vendors like Yoti and Incode, used to flag likely under-16 users for further verification.
- ID-based verification — government ID upload or digital identity checks, typically only for users who fail the estimation tier.
- Behavioural signals and account-graph analysis — using existing platform data to infer likely age, then triggering challenges.
The trial concluded that age assurance is “technically feasible” but acknowledged meaningful error rates, privacy trade-offs, and accessibility concerns — particularly for teens without government ID and for users from communities historically underrepresented in face-recognition training data.
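To make the tiering concrete, here is a minimal sketch of how such a waterfall might fit together. Everything in it is an assumption for illustration: the `Decision` and `AgeEstimate` types, the two-year buffer, and the thresholds are not drawn from any platform's actual system, but they show why the trial's error rates matter so much.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    ALLOW = auto()       # treated as 16 or over; no further checks
    CHALLENGE = auto()   # too close to the boundary; escalate to the ID tier
    RESTRICT = auto()    # confidently under 16; block or move to supervision


@dataclass
class AgeEstimate:
    years: float   # point estimate from a facial-analysis model
    margin: float  # assumed error band around the estimate, in years


# Illustrative policy constants -- not any platform's real configuration.
MIN_AGE = 16.0
MIN_BUFFER = 2.0  # widen the band so estimation error is absorbed


def triage(est: AgeEstimate) -> Decision:
    """Tier 1: cheap, privacy-lighter facial estimation.

    Only users whose estimate falls inside the uncertainty band around
    the minimum age are escalated to the costlier ID-based tier.
    """
    band = max(est.margin, MIN_BUFFER)
    if est.years - band >= MIN_AGE:
        return Decision.ALLOW      # clearly over 16 even at the band's low end
    if est.years + band < MIN_AGE:
        return Decision.RESTRICT   # clearly under 16 even at the band's high end
    return Decision.CHALLENGE      # ambiguous: request ID or another proof


if __name__ == "__main__":
    for est in (AgeEstimate(21.4, 1.8), AgeEstimate(15.9, 1.8), AgeEstimate(12.2, 1.8)):
        print(f"{est.years:>5.1f} -> {triage(est).name}")
```

Even this toy exposes the central trade-off: the wider the model's error margin, the more users, adults included, get funnelled into the intrusive ID tier.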
What's Going Right
It is too early for definitive verdicts, but a few things have gone better than critics predicted. Platforms did not refuse to comply or withdraw services. The eSafety Commissioner has so far taken a graduated supervisory posture rather than rushing to maximum penalties. And there is real evidence that many under-16 accounts have been deactivated or transitioned to supervised experiences — Meta and TikTok both confirmed mass account actions in their Australian transparency disclosures earlier this year.
Public support has held up too. Polling by the Australia Institute and others throughout 2025 consistently showed majorities of parents backing the law. For a government navigating a noisy debate about youth mental health, that political durability matters.
Where the Cracks Are Showing
But several problems are now visible:
1. VPNs and workarounds
Australian press has reported sharp increases in VPN downloads since the law took effect, alongside teens openly discussing methods to spoof age checks. This is the predictable consequence of a country-level ban on services that are global by design. The eSafety Commissioner has acknowledged platforms are not expected to defeat determined circumvention, but the policy's deterrent value erodes the more visible the workarounds become.
2. Privacy and data-minimisation tensions
Requiring platforms to assess every user's age — including adults — creates exactly the kind of identity-data honeypot that privacy regulators in Europe and Australia have spent a decade trying to discourage. The Office of the Australian Information Commissioner has flagged concerns about scope creep, and civil liberties groups including Digital Rights Watch have called for stronger statutory guardrails on retention and secondary use.
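One guardrail advocates have pushed for is strict data minimisation: verify once, keep only a signed yes/no outcome, and never retain the document or selfie that produced it. The sketch below shows that pattern in miniature; the token format, field names, and 90-day expiry are all illustrative assumptions, not anything the Act or the OAIC prescribes.

```python
import hashlib
import hmac
import os
import time

# Hypothetical server-side secret; a real deployment would use an HSM or KMS.
SIGNING_KEY = os.urandom(32)


def mint_age_token(user_id: str, over_16: bool, ttl_seconds: int = 90 * 86400) -> str:
    """Issue a signed over/under-16 attestation and nothing else.

    The ID image or selfie used upstream never reaches this function,
    so there is no identity-rich data to retain, breach, or repurpose.
    """
    expires = int(time.time()) + ttl_seconds
    payload = f"{user_id}|{int(over_16)}|{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def check_age_token(token: str) -> bool:
    """Accept only a validly signed, unexpired over-16 claim."""
    try:
        user_id, flag, expires, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{user_id}|{flag}|{expires}"
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and flag == "1" and int(expires) > time.time()
```

Whether platforms actually converge on the narrow design, or quietly retain the richer data, is precisely what the statutory guardrails Digital Rights Watch is calling for would determine.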
3. The displacement question
Teens are not leaving the internet; they are migrating. Anecdotal reports point to greater use of gaming chat, Discord servers, encrypted messengers, and smaller forums that fall outside the designated services list. If the harms the law was designed to address — bullying, exposure to harmful content, sleep disruption — simply reappear on less moderated platforms, the policy will have shifted risk rather than reduced it.
4. Uneven impact on smaller services
The economics of age assurance fall hardest on smaller platforms. Designated services with global compliance budgets can absorb the cost; smaller Australian-built apps and communities face per-user verification fees that can be existential. Without careful calibration, the law risks entrenching incumbents — the opposite of what a healthy internet ecosystem needs.
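The asymmetry is easy to see with back-of-envelope arithmetic. The fee and user figures below are placeholders, not quoted vendor prices or platform counts; what matters is the shape, since large buyers negotiate volume rates that small ones cannot.

```python
def annual_check_cost(users: int, checks_per_user: float, fee_aud: float) -> float:
    """Recurring cost of age assurance under assumed per-check pricing."""
    return users * checks_per_user * fee_aud


# Purely illustrative numbers.
incumbent = annual_check_cost(20_000_000, 1.0, fee_aud=0.15)  # negotiated volume rate
local_app = annual_check_cost(500_000, 1.0, fee_aud=0.60)     # small-volume retail rate

print(f"global incumbent: A${incumbent:,.0f}")  # A$3,000,000
print(f"local app:        A${local_app:,.0f}")  # A$300,000
```

Three million dollars is a rounding error against a global platform's revenue; three hundred thousand can exceed a small Australian app's entire operating budget. That is the entrenchment dynamic the statutory review should quantify.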
A Proportionate Path Forward
The Albanese government has committed to a statutory review of the law's operation. That review should be the moment to ask honest questions: Has youth wellbeing actually improved? Have harms migrated? What are the privacy costs? Are smaller services being squeezed?
From a pro-innovation, proportionate-regulation perspective, the case is not that Australia was wrong to act on youth online safety. It is that bans are a blunt instrument, one that should be paired with, and where possible replaced by, more targeted tools: default safety settings for minors, friction in recommender systems, transparency mandates, and meaningful research access for independent scientists. The UK's Age Appropriate Design Code and the EU Digital Services Act's risk-assessment framework offer models that focus on platform design rather than user exclusion.
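What design-level regulation means in practice is often mundane: defaults rather than gates. A hypothetical minor-safe default profile, every field of which is an illustrative assumption rather than anything the Code or the DSA prescribes, might look like this:

```python
# Hypothetical safety-by-default profile for accounts identified as minors --
# an illustration of design-level tools, not any regulator's specification.
MINOR_DEFAULTS = {
    "account_private": True,                   # no public discovery; follower approval required
    "dm_from_strangers": False,                # contact limited to accepted connections
    "recommender_personalisation": "limited",  # curated or chronological, not engagement-ranked
    "autoplay": False,                         # friction against infinite-scroll loops
    "quiet_hours": ("22:00", "07:00"),         # notifications muted overnight
    "ad_targeting": "contextual_only",         # no profiling-based advertising
}
```

The point of the comparison is that every one of these levers operates on the service's design: no user has to be identified, excluded, or asked for ID to benefit.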
The lesson of Australia's first five months is not that the under-16 ban has failed. It is that bans alone cannot do the work the public expects of them — and that the burden of evidence must rise as the policy matures.
Other jurisdictions, including Indonesia, the UK, and several US states, are watching Australia closely. The most important thing Canberra can do now is publish honest data, commission rigorous independent evaluation, and resist the temptation to declare victory before the evidence is in.