
Singapore Eyes Australia's Under-16 Social Media Ban — But Should the Lion City Follow?

As MDDI and IMDA study age-assurance models, Singapore faces a choice between Australia's hard ban and a more proportionate, evidence-led path.

Infographic — Singapore's Choice on Under-16 Social Media (People of Internet Research): 16, the minimum age floor set by Australia's 2024 Act; A$49.5M, the maximum per-breach civil penalty for platforms under the Australian law; ~5 months of enforcement data since Australia's ban took effect; 10+ regulators in the Global Online Safety Regulators Network.

Key Takeaways

Five months after Australia's Online Safety Amendment (Social Media Minimum Age) Act took effect in December 2025 — barring under-16s from TikTok, Instagram, Snapchat, X and similar services — Singapore is publicly weighing whether to import a version of the model. Officials at the Ministry of Digital Development and Information (MDDI) and the Infocomm Media Development Authority (IMDA) have signalled that age-assurance approaches for minors are under active study, with Singapore continuing to coordinate with the Australian eSafety Commissioner through the Global Online Safety Regulators Network.

For a jurisdiction that has long prided itself on calibrated, outcomes-based digital regulation — the Code of Practice for Online Safety, the Online Criminal Harms Act, and the Online Safety (Miscellaneous Amendments) Act — the temptation to follow Canberra is real. The political logic is intuitive: parents are anxious, the harms are emotionally salient, and a hard floor of 16 makes for a clean headline. The policy logic is far less tidy.

What Australia Actually Built — and What It Hasn't Yet Proven

The Australian law makes platforms, not parents or children, responsible for taking "reasonable steps" to prevent under-16s from holding accounts. The eSafety Commissioner, Julie Inman Grant, has been empowered to pursue civil penalties of up to roughly A$49.5 million per breach. Crucially, the statute does not mandate any specific technology — facial age estimation, document upload, behavioural inference and parental vouching are all theoretically in scope, with the Commissioner's guidance evolving in parallel with the government-commissioned Age Assurance Technology Trial led by the UK firm Age Check Certification Scheme.

That trial's interim findings, released in 2025, were honest about the limits of the technology: facial age estimation works reasonably well at the population level but is materially less accurate for some demographics, document checks raise data-minimisation concerns, and no single method is both privacy-preserving and highly accurate at the individual level. Five months into enforcement, there is no public, peer-reviewed evidence that Australian teenagers are spending less time online, encountering less harmful content, or reporting better mental health. There is, however, growing anecdotal evidence of migration to less-moderated platforms, VPN use, and parental workarounds — precisely the substitution effects researchers had warned about.

Singapore's Comparative Advantage Is Calibration, Not Imitation

Singapore's regulatory tradition has been to avoid blunt prohibitions in favour of duties of care, transparency obligations, and technology-neutral standards. The 2023 Code of Practice for Online Safety applies systemic obligations to designated services — risk assessments, child safety tools, user reporting — without telling platforms how to build them. That approach has aged well. By contrast, age-gating laws in jurisdictions from Utah to France have been repeatedly enjoined, narrowed, or quietly de-emphasised after constitutional and practical problems emerged.

A pro-innovation, proportionate Singapore response would start from three principles: proportionality over prohibition, technology-neutral standards rather than mandated tools, and evidence before enforcement — measuring outcomes first and mandating only what the data supports.

The Free-Speech and Access Costs Are Real

Singapore's Constitution protects expression more narrowly than Australia's implied freedom of political communication, but the policy costs of cutting an entire age cohort off from the dominant public squares are universal. Under-16s are not just consumers of content; they organise study groups, run small creator businesses, access health information, and — in a region where LGBTQ+ youth and dissenting voices already face offline pressures — find communities they cannot find at home. A blanket ban does not distinguish between a 15-year-old watching exam-prep videos and one in a self-harm spiral.

What Singapore Should Do Instead

The Global Online Safety Regulators Network is genuinely useful infrastructure for cross-border takedown coordination and shared threat intelligence. But coordination should not collapse into convergence on the most restrictive option on offer. Singapore can lead by commissioning independent evaluation of Australia's enforcement data, extending the Code of Practice's child-safety obligations rather than replacing them with a ban, and pushing the network toward shared evidence standards instead of shared prohibitions.

Australia's experiment deserves close study, not quick imitation. Singapore's edge has always been that it regulates technology with the seriousness of an engineer rather than the urgency of a campaigner. On under-16 social media, the engineer's answer is: measure first, mandate later, and never assume a ban is the same thing as a solution.

Sources & Citations

  1. Australian Government — Online Safety Amendment (Social Media Minimum Age) Act overview
  2. eSafety Commissioner — Social media age restrictions
  3. IMDA — Code of Practice for Online Safety
  4. Global Online Safety Regulators Network
  5. UK ICO — Age Appropriate Design Code