
Australia's Under-16 Social Media Ban: Why eSafety's World-First Experiment Should Worry Democracies Everywhere

Six months into Australia's age-gated internet, the costs to privacy, speech, and innovation are coming into focus — and other governments are copying the template.

[Infographic] Australia's Age-Gated Internet, By the Numbers (People of Internet Research, peopleofinternet.com):
  16: minimum age for accounts, the floor set by the 2024 Social Media Minimum Age amendment
  A$49.5M: maximum platform penalty, the civil penalty for systemic compliance failures
  Dec 2025: the ban took effect, a world-first national under-16 social media ban
  8+: industry codes in force, covering search, app stores, messaging, and equipment providers

Key Takeaways

On December 10, 2025, Australia became the first democracy to legally bar teenagers under 16 from holding accounts on major social media platforms. Six months in, eSafety Commissioner Julie Inman Grant continues to wield expanded powers under the Online Safety Act 2021 — issuing transparency notices, enforcing industry codes, and policing the Basic Online Safety Expectations — while platforms scramble to deploy age-assurance systems at scale. The architecture is novel. The trade-offs are not. And as similar proposals advance in California, the United Kingdom, and across the EU, the Australian experiment is rapidly becoming the world's most consequential test of whether age-gating the internet can be reconciled with an open, rights-respecting digital society.

The Online Safety Amendment (Social Media Minimum Age) Act 2024 is, in its drafting, narrow: it places the compliance burden on platforms — not parents or children — and threatens fines of up to A$49.5 million for systemic failures to take "reasonable steps" to keep under-16s off covered services. In practice, this has forced Meta, TikTok, Snap, X, and others to build or contract age-verification stacks that touch virtually every Australian internet user, because you cannot reliably exclude minors without first assessing the age of adults. The government's own age-assurance technology trial, delivered in late 2025 by the UK-based Age Check Certification Scheme, found that no single method is both highly accurate and privacy-preserving — a conclusion regulators have nonetheless treated as a green light.
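To see why the compliance burden lands on every user, it helps to look at the shape of the signup path itself. What follows is a minimal, hypothetical sketch — the names and signatures are illustrative, not any real vendor's API — showing that an under-16 floor forces an age assessment on every new account, because age is precisely the attribute the platform does not yet know.

```python
# Hypothetical sketch: enforcing a minimum age means assessing everyone,
# because the platform cannot know who is under 16 until it checks.
# Names and signatures are illustrative, not any real vendor's API.
from typing import Protocol


class AgeAssuranceProvider(Protocol):
    def estimate_age(self, evidence: bytes) -> float: ...


def gate_signup(evidence: bytes, provider: AgeAssuranceProvider) -> bool:
    # Runs for EVERY new account. There is no branch that lets an adult
    # skip the check, since age is exactly what is unknown at signup.
    return provider.estimate_age(evidence) >= 16


class StubProvider:
    """Stand-in for a facial-estimation or document-check service."""
    def estimate_age(self, evidence: bytes) -> float:
        return float(len(evidence))  # placeholder logic only


print(gate_signup(b"x" * 40, StubProvider()))  # True: admitted, but only after being assessed
```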

A regulator with extraordinary reach

The eSafety Commissioner is now arguably the most powerful internet regulator in the democratic world. Under the Online Safety Act, the office can issue removal notices for cyberbullying material targeting children, image-based abuse, and "class 1" content, with 24-hour compliance windows. The Basic Online Safety Expectations, updated in 2024, allow the Commissioner to demand transparency reports on everything from recommender systems to generative-AI safeguards, backed by civil penalties for non-response. Several industry codes — covering search, app stores, messaging, and equipment providers — came into force across 2024 and 2025, layering further obligations on top of the age-minimum regime.

None of this is intrinsically illegitimate. Cyberbullying causes real harm. Child sexual abuse material must be removed swiftly. But the steady accretion of discretionary authority in a single statutory officer, exercised through informal pressure as often as formal notices, raises questions a mature regulatory democracy should be asking out loud. Inman Grant's 2024 attempt to force X to remove footage of a Sydney church stabbing worldwide — an order the Federal Court of Australia declined to sustain, and which eSafety ultimately abandoned — was an early warning that the office's reach can exceed its remit. The under-16 ban, with its requirement that platforms make age determinations about every user, hands that same office a permanent stake in identity infrastructure.

The age-assurance trap

Every credible age-assurance method comes with a serious cost. Government-ID checks create centralised honeypots of sensitive data. Facial-age-estimation tools — increasingly the default — have documented accuracy gaps across skin tones, ages near the threshold, and gender presentation, and they normalise biometric scanning as a condition of online participation. "Inference" approaches that profile users from behavioural signals trade one privacy harm for another. The eSafety Commissioner has signalled regulatory tolerance for a range of approaches, but the predictable outcome is that the cheapest, most invasive options will dominate at the long tail of the market.
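The threshold problem can be quantified directly. Below is a minimal sketch, assuming the estimator's error is roughly normal with a 2.5-year standard deviation; both the error model and the figure are illustrative assumptions, not results from the Australian trial. The point it makes: misclassification concentrates exactly where the ban needs precision most, in the 15-to-17 band.

```python
# Illustrative only: how estimator error behaves around a hard age threshold.
# Assumes normally distributed error with an invented 2.5-year std deviation.
import math

THRESHOLD = 16   # minimum age under the 2024 Act
SIGMA = 2.5      # assumed estimator error in years (illustrative, not measured)


def misclassification_rate(true_age: float) -> float:
    """Probability the estimate lands on the wrong side of the threshold."""
    # P(estimate < THRESHOLD) for a normal error centred on true_age
    p_under = 0.5 * (1 + math.erf((THRESHOLD - true_age) / (SIGMA * math.sqrt(2))))
    return p_under if true_age >= THRESHOLD else 1 - p_under


for age in (13, 15, 16, 17, 20, 30):
    kind = "wrongly admitted" if age < THRESHOLD else "wrongly blocked"
    print(f"true age {age}: {misclassification_rate(age):.0%} {kind}")
```

Under these assumptions, roughly a third of 15- and 17-year-olds land on the wrong side of the line, while 30-year-olds are almost never affected; error concentrated at the threshold matters far more than any headline accuracy figure.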

The Electronic Frontier Foundation, reviewing the wave of social media age bans now spreading from Canberra to Sacramento, has been blunt: these laws "impose a dangerous new system" of identity verification on adults and minors alike, with speech and privacy consequences that fall hardest on the most marginalised. That framing deserves serious engagement, not dismissal. LGBTQ+ teenagers, young people in abusive households, and politically active minors in diaspora communities have historically used social platforms as lifelines; cutting them off, or forcing them onto unregulated alternatives, is not a costless intervention.

What proportionate regulation looks like

A pro-innovation, evidence-based approach to youth online safety would start from three principles. First, design duties beat blanket bans: requiring platforms to offer high-privacy defaults, robust parental tools, and friction on engagement-maximising features for minor accounts addresses the actual harms — sleep disruption, algorithmic amplification of self-harm content, predatory contact — without conscripting every adult into a verification regime. Second, transparency must be reciprocal: if the eSafety Commissioner can demand audit-grade data from platforms, the office's own enforcement decisions, age-assurance approvals, and informal directions should be equally legible to the public. Third, sunset clauses and independent review are non-negotiable for any regime that touches identity and speech at this scale; the Act's statutory review, due in 2026, should be treated as a genuine off-ramp, not a rubber stamp.
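The first of those principles is concrete enough to write down. The sketch below is entirely hypothetical (no platform or regulator prescribes this schema); it shows how each default for a known-minor account maps onto a named harm, without requiring an age check on any adult.

```python
# Hypothetical sketch of "design duties": safety defaults applied to accounts
# already known or self-declared to be minors. All field names are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class MinorAccountDefaults:
    private_profile: bool = True               # high-privacy default, not opt-in
    dm_from_strangers: bool = False            # mitigates predatory contact
    recommender_personalisation: bool = False  # limits algorithmic amplification
    autoplay: bool = False                     # adds friction to engagement loops
    quiet_hours: tuple = ("22:00", "07:00")    # addresses sleep disruption
    parental_dashboard: bool = True            # oversight tooling for guardians


# Each default targets a specific, named harm directly; none of them depends
# on verifying the age of the adult population.
print(MinorAccountDefaults())
```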

Even OpenAI, in publicly endorsing the bipartisan Kids Online Safety Act in the United States this month, framed its support around default settings, parental controls, and accountability — not categorical exclusion. That distinction matters. The Australian model has skipped the harder design work and reached straight for the prohibition lever, betting that platform liability will force technology and behaviour to follow. It is a bet other democracies — California's pending under-16 restrictions among them — are increasingly tempted to copy.

Australia has chosen to run the experiment for the rest of us. The least the rest of us can do is watch it honestly: track the displacement of teen users to encrypted and offshore services, audit the privacy footprint of age-assurance vendors, and demand that the eSafety Commissioner show her working. A safer internet for young people is a worthy goal. An identity-gated internet, governed by an unaccountable regulator, is not the same thing — and policymakers from Sacramento to Brussels should not pretend otherwise.

Sources & Citations

  1. EFF — global social media age bans critique
  2. Australian eSafety Commissioner — Online Safety Act and codes
  3. Online Safety Amendment (Social Media Minimum Age) Act 2024
  4. MediaNama — OpenAI endorses KOSA, framing design-based safety