
Australia's Under-16 Social Media Ban: A World-First Experiment in Age-Gating the Internet

Five months into enforcement, Australia's ban on under-16s using social media is exposing the limits of age assurance — and the costs of regulation that outpaces technology.

Australia's Under-16 Ban by the Numbers

- Minimum age: 16 (threshold for holding a social media account)
- Maximum platform fine: A$49.5M (civil penalty for systemic non-compliance)
- Effective date: 10 December 2025 (start of enforcement against designated platforms)
- Age-check methods the ACCS trial rated fully reliable: 0

Key Takeaways

On 10 December 2025, Australia became the first country to legally bar children under 16 from holding social media accounts. The Online Safety Amendment (Social Media Minimum Age) Act 2024 requires platforms designated by the Minister — including TikTok, Instagram, Snapchat, Facebook, X and YouTube — to take 'reasonable steps' to detect and deactivate accounts held by under-16s, with non-compliance carrying civil penalties of up to A$49.5 million. Five months in, the policy is being closely watched by regulators from London to Brasília. It is also, predictably, running into the very problems its critics warned about.

What the Act actually requires

The law does not criminalise children, parents, or even individual account holders. It places the obligation squarely on platforms to prevent under-16s from creating or maintaining accounts on 'age-restricted social media platforms', a category defined by the Minister and currently covering the major consumer social apps. Messaging services, online gaming and education platforms are excluded; YouTube was also initially carved out after intense lobbying, but was ultimately swept back in, a reversal that drew sharp criticism from Google and from child-development researchers who pointed to the platform's role in homework, music and special-interest content.

Crucially, the statute is technology-neutral. It does not mandate any specific age-verification method. The eSafety Commissioner has issued guidance suggesting platforms can use age inference, document checks, parental vouching, or third-party age assurance providers — so long as the overall approach is judged 'reasonable'. That ambiguity is doing a lot of work.
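To make the 'layered' logic in the guidance concrete, here is a minimal sketch of how a platform might combine several age-assurance signals rather than rely on any single method. Everything here is hypothetical: the signal names, confidence values and thresholds are invented for illustration and do not come from the eSafety Commissioner's guidance.

```python
from dataclasses import dataclass

@dataclass
class AgeSignal:
    method: str           # e.g. "facial_estimation", "id_document", "parental_vouch"
    estimated_age: float  # point estimate in years
    confidence: float     # 0.0-1.0, method-specific reliability (illustrative)

def assess_account(signals: list[AgeSignal], minimum_age: int = 16) -> str:
    """Combine independent signals; escalate rather than decide on one alone."""
    if not signals:
        return "request_verification"  # no evidence either way
    # A high-confidence signal clearly above the threshold passes.
    if any(s.confidence >= 0.9 and s.estimated_age >= minimum_age + 2 for s in signals):
        return "allow"
    # A high-confidence signal clearly below the threshold fails.
    if any(s.confidence >= 0.9 and s.estimated_age < minimum_age - 1 for s in signals):
        return "deactivate"
    # Ambiguous cases near the boundary escalate to a stronger check.
    return "request_verification"
```

The design choice worth noting is the middle band: any estimate near the 16-year boundary triggers escalation to a stronger method instead of a hard allow/deny, which is broadly what a layered approach implies.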

The age assurance trial: no silver bullet

The government-commissioned Age Assurance Technology Trial, run by the UK-based Age Check Certification Scheme (ACCS) and reported in 2025, tested dozens of providers across facial age estimation, ID-based verification, parental controls and behavioural inference. Its top-line finding was unambiguous: no single method is fully reliable across the population. Facial age estimation tools clustered around the right age but struggled at the critical 13–17 boundary, with error margins typically of a year or more. ID-based checks were more accurate but introduced friction and serious privacy exposure. Behavioural inference raised obvious questions about surveillance of minors.
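The boundary problem is easy to quantify. Assuming (purely for illustration, not from the trial's data) that an estimator's error is Gaussian with a standard deviation of one year, the probability of landing on the wrong side of the age-16 threshold can be computed directly:

```python
import math

def misclassification_rate(true_age: float, threshold: float = 16.0,
                           error_sd: float = 1.0) -> float:
    """Probability a Gaussian-error estimator puts true_age on the wrong
    side of the threshold. error_sd=1.0 loosely reflects the roughly
    one-year margins reported for the 13-17 range (illustrative only)."""
    # P(estimate >= threshold) for estimate ~ Normal(true_age, error_sd)
    z = (threshold - true_age) / error_sd
    p_over = 0.5 * math.erfc(z / math.sqrt(2))
    # "Wrong side" means an under-age user passing, or an of-age user blocked.
    return p_over if true_age < threshold else 1.0 - p_over
```

Under these assumptions a 15-year-old passes the check about 16% of the time, and a user a month short of 16 is effectively a coin flip, which is why the boundary ages dominate the trial's reliability findings.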

The trial's nuanced conclusion — that 'age assurance can be done' but not perfectly, and only through layered approaches — has been read very differently by different camps. Ministers cite it as validation; civil liberties groups, including Digital Rights Watch, read the same report as evidence that the law's premise is technically unworkable without mass data collection.

Early effects: workarounds, withdrawals, and a privacy chill

Reports from the first months of enforcement suggest a predictable pattern. Platforms have rolled out age-estimation tools and stepped up account deletions — Meta, Snap and TikTok have each disclosed bulk removals of suspected under-16 accounts. At the same time, surveys by Australian researchers and outlets including the ABC and The Guardian indicate that many teenagers are migrating to VPNs, lying about their age, or moving to platforms not (yet) covered by the designation — Discord servers, Roblox, and smaller forums. A meaningful share appear to be using parents' accounts.

The privacy cost is also becoming visible. Adults who never previously had to prove their age to post a photo or read a feed are now being asked for selfies or government ID. The eSafety Commissioner has been clear that platforms must offer at least one privacy-preserving option and must not retain ID documents beyond what is necessary, but the architecture of mass age-checking has been built — and once built, it tends to be reused.
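One way to reconcile verification with the no-retention requirement is a 'verify then discard' flow: a third-party provider inspects the document and returns only a signed over/under attestation, so the platform never holds the ID itself. The sketch below is a hypothetical illustration of that pattern; the token format, field names and shared-secret signing are invented, and a real deployment would use asymmetric keys and a standard token scheme.

```python
import hashlib
import hmac
import json
import time

PROVIDER_KEY = b"demo-shared-secret"  # illustrative only; not how keys are managed

def issue_attestation(is_over_16: bool) -> dict:
    """Provider side: sign a minimal claim after checking the document.
    The underlying ID document is discarded here, never forwarded."""
    claim = {"over_16": is_over_16, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attestation(token: dict) -> bool:
    """Platform side: keep only the boolean outcome, never the ID."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_16"]
```

The point of the pattern is data minimisation: the only artefact the platform stores is a yes/no claim and a timestamp, which is the kind of privacy-preserving option the eSafety Commissioner's guidance asks platforms to offer.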

A proportionate critique

The instinct behind the law is understandable. Australian parents, like parents everywhere, are anxious about the documented links between heavy social media use and adolescent mental health, and about algorithmic amplification of harmful content. But good intentions do not make good regulation. Three problems stand out:

1. The verification technology is not ready. The government's own trial found no method fully reliable at the 13–17 boundary, so any enforcement regime will both block some adults and wave through some children.
2. The ban is easy to circumvent. Early reporting shows teenagers turning to VPNs, borrowed parental accounts, and platforms outside the designation, shifting activity to spaces with fewer safety features rather than eliminating it.
3. The privacy cost falls on everyone. Requiring platforms to age-check their entire user base builds mass verification infrastructure whose data flows and habits are likely to outlast the policy that justified them.

A better path

Proportionate regulation would focus on the design of products used by minors rather than on excluding minors from them: default-private accounts, algorithmic transparency, restrictions on engagement-maximising design patterns aimed at children, and meaningful enforcement of existing duties under the Online Safety Act 2021 and Australia's Privacy Act reforms. The UK's Age Appropriate Design Code and the EU's emerging child-safety guidance under the Digital Services Act point in this direction. They are imperfect, but they target conduct rather than mere presence.

Australia has taken a bold swing. Other governments — France, Norway, several US states — are watching to see whether the ban actually moves the needle on teen wellbeing, and at what cost. The early signal is that the policy is doing more to reshape verification infrastructure than to reshape adolescent online life. That is a poor trade. The lesson for other jurisdictions is not that child safety online is unimportant — it is — but that regulating the internet by age boundary is harder, costlier, and more privacy-invasive than the headlines suggest.

Sources & Citations

  1. Online Safety Amendment (Social Media Minimum Age) Act 2024 — Parliament of Australia
  2. eSafety Commissioner — Social media minimum age guidance
  3. Age Check Certification Scheme (ACCS) — Age Assurance Technology Trial
  4. UK ICO — Age Appropriate Design Code
  5. Digital Rights Watch — analysis of Australia's minimum age law