Five months after Australia's Online Safety Amendment (Social Media Minimum Age) Act took effect in December 2025 — barring under-16s from TikTok, Instagram, Snapchat, X and similar services — Singapore is publicly weighing whether to import a version of the model. Officials at the Ministry of Digital Development and Information (MDDI) and the Infocomm Media Development Authority (IMDA) have signalled that age-assurance approaches for minors are under active study, with Singapore continuing to coordinate with the Australian eSafety Commissioner through the Global Online Safety Regulators Network.
For a jurisdiction that has long prided itself on calibrated, outcomes-based digital regulation — the Code of Practice for Online Safety, the Online Criminal Harms Act, and the Online Safety (Miscellaneous Amendments) Act — the temptation to follow Canberra is real. The political logic is intuitive: parents are anxious, the harms are emotionally salient, and a hard floor of 16 makes for a clean headline. The policy logic is far less tidy.
What Australia Actually Built — and What It Hasn't Yet Proven
The Australian law makes platforms, not parents or children, responsible for taking "reasonable steps" to prevent under-16s from holding accounts. The eSafety Commissioner, Julie Inman Grant, can seek court-imposed civil penalties of up to roughly A$49.5 million per breach. Crucially, the statute does not mandate any specific technology: facial age estimation, document upload, behavioural inference and parental vouching are all theoretically in scope, with the Commissioner's guidance evolving in parallel with the government-commissioned Age Assurance Technology Trial led by the UK firm Age Check Certification Scheme.
That trial's interim findings, released in 2025, were honest about the limits of the technology: facial age estimation works reasonably well at the population level but is materially less accurate for some demographics, document checks raise data-minimisation concerns, and no single method is both privacy-preserving and highly accurate at the individual level. Five months into enforcement, there is no public, peer-reviewed evidence that Australian teenagers are spending less time online, encountering less harmful content, or reporting better mental health. There is, however, growing anecdotal evidence of migration to less-moderated platforms, VPN use, and parental workarounds — precisely the substitution effects researchers had warned about.
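A toy calculation makes the individual-level problem concrete. Assume, purely for illustration, that a facial-age estimator's error is Gaussian with a standard deviation of 1.5 years; that figure is invented for this sketch, not drawn from the trial. Near a hard cutoff at 16, the errors cut both ways:

```python
import random

# Toy simulation (invented error model, NOT trial data): a facial-age
# estimator whose error is Gaussian with SD 1.5 years, applied as a
# hard pass/fail gate at age 16.
random.seed(42)
ERROR_SD = 1.5   # assumed for illustration only
TRIALS = 100_000

def estimated_age(true_age: float) -> float:
    """One noisy age estimate for a user of the given true age."""
    return true_age + random.gauss(0, ERROR_SD)

for true_age in (14.0, 15.0, 15.5, 16.5):
    passed = sum(estimated_age(true_age) >= 16 for _ in range(TRIALS))
    print(f"true age {true_age:4}: {passed / TRIALS:5.1%} classified as 16+")
```

Under these assumptions, roughly a third of 15-and-a-half-year-olds pass the gate while a similar share of just-turned-16s are wrongly locked out, and moving the threshold only trades one error for the other. Population-level accuracy says little about the users who matter most: those near the line.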
Singapore's Comparative Advantage Is Calibration, Not Imitation
Singapore's regulatory tradition has been to avoid blunt prohibitions in favour of duties of care, transparency obligations, and technology-neutral standards. The 2023 Code of Practice for Online Safety applies systemic obligations to designated services — risk assessments, child safety tools, user reporting — without telling platforms how to build them. That approach has aged well. By contrast, age-gating laws in jurisdictions from Utah to France have been repeatedly enjoined, narrowed, or quietly de-emphasised after constitutional and practical problems emerged.
A pro-innovation, proportionate Singapore response would start from three principles:
- Evidence before mandate. Wait for credible outcome data from Australia and the UK Online Safety Act's children's codes (in force from 2025) before legislating a minimum age. A two-year evidence window costs little and avoids locking in a costly architecture that may not work.
- Privacy-by-design age assurance, not identity verification. If age signals are required for higher-risk features, they should be derived through on-device estimation, zero-knowledge attestations, or platform-held signals, not by routing every teenager's passport through a centralised database. The PDPC's own guidance on data minimisation cuts against ID-upload models. (A sketch of what such an attestation could look like follows this list.)
- Feature-level, not platform-level, controls. The harms regulators worry about — algorithmic amplification of self-harm content, unsolicited DMs to minors, addictive design — are features inside platforms, not the platforms themselves. Targeting features is both more effective and far less restrictive of teenagers' legitimate speech and access to information.
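A minimal sketch of the data-minimising shape the second and third principles describe: a trusted issuer attests to a single age bit, and the platform gates individual features on that bit rather than on the account. Everything here is hypothetical; the shared-secret HMAC stands in for the asymmetric signatures or zero-knowledge proofs a real deployment would use, and the issuer, claim format and feature gate are invented for illustration.

```python
import hashlib
import hmac
import json
import time

# Hypothetical trusted issuer (a telco, school system, or app store)
# that has verified age out of band. The attestation carries one
# short-lived boolean bracket: no name, no birthdate, no document image.
ISSUER_SECRET = b"demo-only-secret"  # stand-in; a real system would use
                                     # asymmetric keys or a ZK proof

def issue_attestation(over_16: bool, ttl_seconds: int = 3600) -> dict:
    """Issuer side: sign a minimal age claim with an expiry."""
    claim = {"over_16": over_16, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verified_over_16(att: dict) -> bool:
    """Platform side: learn one bit, and nothing about who the user is."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["tag"]):
        return False
    if att["claim"]["exp"] < time.time():
        return False
    return bool(att["claim"]["over_16"])

# Feature-level gating: the account exists either way; only the risky
# features change with the attested bracket.
att = issue_attestation(over_16=False)
if verified_over_16(att):
    features = {"dms_from_strangers": True, "autoplay_recommendations": True}
else:
    features = {"dms_from_strangers": False, "autoplay_recommendations": False}
print(features)
```

The point of the shape is what the platform never sees: no document, no birthdate, no identity, only a verifiable, expiring bit, with minor-safe defaults applied whenever the bit is absent or fails verification.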
The Free-Speech and Access Costs Are Real
Singapore's constitutional protection of expression is narrower than even Australia's implied freedom of political communication, but the policy costs of cutting an entire age cohort off from the dominant public squares are universal. Under-16s are not just consumers of content; they organise study groups, run small creator businesses, access health information and, in a region where LGBTQ+ youth and dissenting voices already face offline pressures, find communities they cannot find at home. A blanket ban does not distinguish between a 15-year-old watching exam-prep videos and one in a self-harm spiral.
What Singapore Should Do Instead
The Global Online Safety Regulators Network is genuinely useful infrastructure for cross-border takedown coordination and shared threat intelligence. But coordination should not collapse into convergence on the most restrictive option on offer. Singapore can lead by:
- Publishing a transparent, peer-reviewed evaluation framework before any age-assurance pilot.
- Commissioning independent research on substitution effects — VPN use, migration to encrypted or fringe platforms, and the welfare impact on isolated youth.
- Strengthening the existing Code of Practice with auditable design-code obligations modelled on the UK ICO's Age Appropriate Design Code, rather than a hard ban.
- Investing in digital literacy through MOE and the Media Literacy Council, the intervention with the strongest long-run evidence base.
Australia's experiment deserves close study, not quick imitation. Singapore's edge has always been that it regulates technology with the seriousness of an engineer rather than the urgency of a campaigner. On under-16 social media, the engineer's answer is: measure first, mandate later, and never assume a ban is the same thing as a solution.