On 10 December 2025, Australia became the first country in the world to enforce a national social media minimum age. Five months later, the Online Safety Amendment (Social Media Minimum Age) Act 2024 has moved from political talking point to operational reality — and the genuinely difficult questions are only now coming into focus.
The law obliges designated platforms — currently including Meta's Facebook and Instagram, TikTok, Snapchat, X and Reddit — to take 'reasonable steps' to prevent under-16s from holding accounts. Penalties for systemic failure can reach AUD 49.5 million per breach, and enforcement sits with the eSafety Commissioner, who has been publishing compliance expectations through the first half of 2026.
What 'Reasonable Steps' Actually Requires
The Act is deliberately framed as a duty on platforms, not on users. Australians are explicitly not required to hand over government ID, and the law forbids platforms from making official identity documents the only path to age assurance. That choice shifts the compliance burden onto private companies — which must now build, license or buy age-assurance technology that works at population scale.
What that looks like in practice has been shaped by the Age Assurance Technology Trial, commissioned by the Australian government and led by the UK-based Age Check Certification Scheme. Its reported findings were that no single technology is perfectly accurate, but that combinations of facial age estimation, behavioural signals and account-history analysis could plausibly meet the 'reasonable steps' bar. Meta and TikTok have since rolled out estimation tools, and several platforms began deactivating accounts they flagged as belonging to under-16s in the weeks before and after the December commencement.
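To make the "combination of signals" idea concrete, here is a minimal sketch of a layered age-assurance check. Everything in it — the signal names, the two-year buffer zone, the escalation logic — is an illustrative assumption, not the trial's methodology or any platform's actual implementation; the point is only that systems of this shape decide most cases automatically and route the uncertain band near the boundary to a stronger check.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    facial_estimate: Optional[float]   # estimated age from facial analysis, if available
    inferred_age: Optional[float]      # age inferred from behavioural/account-history signals

def age_decision(s: AgeSignals, boundary: float = 16.0, buffer: float = 2.0) -> str:
    """Hypothetical combined check: 'allow', 'block', or 'escalate' to a stronger method."""
    estimates = [e for e in (s.facial_estimate, s.inferred_age) if e is not None]
    if not estimates:
        return "escalate"                  # no usable signal: require a stronger check
    avg = sum(estimates) / len(estimates)  # naive fusion of available estimates
    if avg >= boundary + buffer:
        return "allow"                     # comfortably above the minimum age
    if avg < boundary - buffer:
        return "block"                     # comfortably below it
    return "escalate"                      # in the uncertain band around the boundary
```

The escalation band matters because, as the trial reportedly found, no single estimator is accurate enough to draw a hard line at exactly 16 on its own.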
The Case Behind the Law
The political momentum is not hard to understand. Successive Australian inquiries — from the eSafety Commissioner's research on online harms to long-running coverage of teen mental health and bullying — built a narrative that mainstream parents found compelling. Prime Minister Anthony Albanese's government framed the ban as a clear, enforceable line in a debate that had spent a decade producing little more than industry self-regulation. Polling reported through 2025 consistently showed strong public support.
None of that should be dismissed. Algorithmic amplification of harmful content to young users is a real problem, and platform responses have, charitably, been uneven.
Why Proportionality Still Matters
But 'do something' is not the same as 'do the right thing,' and the design of the Australian model raises serious questions that the next twelve months will test.
The first is efficacy. Age-estimation models, even at their best, have meaningful error rates around the 16-year boundary. False positives sweep up 17- and 18-year-olds; false negatives miss determined 14-year-olds, particularly those willing to use a VPN or borrow an older sibling's device. A measure billed as a hard line can quickly become a soft filter — useful at the margins, but a long way from the absolute protection its political framing implies.
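A back-of-envelope calculation shows why even modest error rates translate into large absolute numbers at population scale. Every figure below is an illustrative assumption — the user counts and error rates are hypothetical, not measured accuracy data for any deployed system:

```python
# Hypothetical population and error-rate assumptions, for illustration only.
users_16_to_18 = 600_000    # assumed Australian users aged 16-18 near the boundary
users_under_16 = 900_000    # assumed under-16s attempting to hold accounts

false_positive_rate = 0.05  # assumed: over-16s wrongly flagged as under-16
false_negative_rate = 0.10  # assumed: under-16s wrongly passed as over-16

wrongly_blocked = int(users_16_to_18 * false_positive_rate)  # legitimate users swept up
wrongly_allowed = int(users_under_16 * false_negative_rate)  # minors who slip through

print(f"Over-16s wrongly swept up:  {wrongly_blocked:,}")
print(f"Under-16s who slip through: {wrongly_allowed:,}")
```

Under these assumed figures, tens of thousands of legitimate users are misclassified while tens of thousands of minors slip through — which is exactly the gap between a "hard line" and a "soft filter".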
The second is privacy. Pushing every Australian user through some form of age check — facial scan, document upload or behavioural inference — is a structural change to how the consumer internet works in this country. Even when platforms promise that biometric data is processed on-device and discarded, the architecture of compliance creates new data flows, new vendors and new attack surfaces. The eSafety Commissioner's guidance has rightly emphasised data minimisation, but the underlying tension is real: stronger age assurance and stronger privacy pull in opposite directions.
The third is fragmentation. The Act applies only to 'age-restricted social media platforms,' a definition that has already produced contested edge cases — most visibly the political fight over whether YouTube belongs inside or outside the perimeter. Messaging apps, many gaming platforms and a long tail of smaller services sit in grey zones. A patchwork ban that pushes minors from named platforms toward less-moderated alternatives is not obviously a child-safety win.
What We're Watching
The right test for the law is empirical, not rhetorical. Over the next year, three indicators will tell us whether the experiment is working:
- Actual exposure, not account counts. A drop in registered under-16 accounts is easy to measure and easy to game. The harder question is whether time spent on relevant platforms by under-16s has meaningfully fallen — and whether mental-health and harms indicators move with it.
- The displacement effect. Independent researchers, not just platforms, need access to data on where Australian teenagers are going instead. If under-16s migrate to encrypted messaging or unregulated forums, the harm surface may grow rather than shrink.
- Privacy and security incidents. Any breach involving age-assurance data — particularly biometric data — will be a defining moment for public trust in the regime.
A Test Case for the Open Internet
Other jurisdictions are watching closely. The UK's Online Safety Act framework, the EU's Digital Services Act age-appropriate design obligations and a growing list of US state laws are all converging on age assurance as a regulatory tool. Australia's experiment will become an evidence base — for better or worse — that those regimes draw on.
Our position is straightforward. Protecting children online is a legitimate and urgent policy goal. But proportionate regulation means measuring outcomes, not announcements; respecting privacy as a co-equal value, not a footnote; and being honest when a high-profile intervention is producing more compliance theatre than child safety. Australia has put a marker down. The next twelve months should be spent rigorously testing whether it works — and being willing to change course if it does not.