
America's Age-Gate Moment: How the GUARD Act Could Turn the Open Internet Into an ID Checkpoint

A new US bill would extend age verification far beyond 'dangerous AI' — pulling general-purpose internet services into a de facto identity regime.

At a glance (People of Internet Research):

- GUARD Act covered services: broad — the bill reaches AI chatbots and general-purpose internet services.
- SCOTUS Paxton ruling (Jun 2025): upheld age-gating only for adult content.
- KOSA Senate sponsors: 2 — a bipartisan bill led by Senators Blackburn and Blumenthal.
- Major opposing groups: EFF and ACLU — both warn of First Amendment and privacy risks.

Key Takeaways

In April 2026, the GUARD Act began advancing in the US Congress, the latest entry in a fast-growing genre of American child-safety legislation that uses age verification as its core enforcement tool. Pitched by sponsors as a guardrail against "dangerous AI," the bill in practice reaches much further: civil liberties groups including the Electronic Frontier Foundation and the ACLU have warned that its definitions sweep in AI chatbots, recommendation systems, and a long tail of general-purpose internet services that ordinary teenagers — and adults — use every day.

Combined with the Kids Online Safety Act (KOSA), sponsored by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) and recently endorsed by OpenAI, the GUARD Act marks an inflection point. Washington is no longer debating whether to age-gate the internet. It is debating how broad the gate should be.

From Adult Sites to the Whole Internet

The constitutional backdrop matters. In June 2025, the Supreme Court decided Free Speech Coalition v. Paxton, upholding Texas's HB 1181 age-verification law for sites whose content is at least one-third "sexual material harmful to minors." Reviewing the law under intermediate scrutiny, the Court accepted that states could require ID checks to keep minors away from pornography — a narrow holding tied to a category of speech long treated as obscene for children.

What Paxton did not do is bless age verification for everything else. The opinion was explicit that the underlying speech was unprotected as to minors. General-purpose platforms — search engines, social media, AI assistants, news sites — host vast quantities of fully protected First Amendment speech. Applying a Texas-style ID mandate to those services raises a very different constitutional question, and one the Court has not answered.

The GUARD Act and KOSA both try to walk through that open door. KOSA imposes a "duty of care" on covered platforms to mitigate harms to minors, paired with default settings and parental controls. The GUARD Act, by contrast, leans directly on access restrictions: services that fall within its scope must verify users' ages before allowing minors to interact with covered AI tools, and in practice that means verifying everyone.

Why "Verify Everyone" Is the Real Cost

There is no technical way to confirm that a user is over 18 without checking the ages of all users, adults included. Age-verification mandates are therefore identity mandates in disguise. The choices are familiar and unattractive: uploading a government ID, submitting to facial age estimation, handing over a payment card, or routing identity data through a third-party verification service.

None of these comes free. EFF has warned that the rapid global spread of age-verification rules — from California's social media restrictions to the UK Online Safety Act to Australia's under-16 social media ban — is converging on a single architecture: an internet where access to lawful information is conditioned on proving who you are. The chilling effect on anonymous speech, whistleblowing, LGBTQ+ youth seeking community, and adults reading politically sensitive material is not a hypothetical edge case. It is the predictable consequence.

The AI Angle: Real Risks, Wrong Tool

It is worth taking the safety concerns seriously. AI chatbots have produced documented harms involving minors, and the industry's record on age-appropriate design is uneven. OpenAI's recent endorsement of KOSA — and of Illinois SB 315, a frontier AI safety bill — reflects a real industry view that AI "must not repeat social media's mistakes." That instinct is correct.

But the GUARD Act's approach confuses the diagnosis with the prescription. The harms regulators most often cite — sycophantic chatbots, sexualized AI roleplay, suicide ideation loops — are product design problems. They are addressable through model-level safety training, default safe modes for unverified users, rate-limited sensitive topics, and crisis-routing tooling. None of those interventions requires a national identity layer on top of the open web.
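The design-level interventions listed above amount to ordinary product logic, not an identity layer. A minimal sketch of that distinction — all names here (`route_request`, `CRISIS_TERMS`, `SENSITIVE_TOPICS`) are hypothetical illustrations, not drawn from any bill or vendor API:

```python
# Hypothetical sketch: safe-by-default routing for a chat product.
# Nobody is asked for ID; unverified sessions simply get safer defaults.

CRISIS_TERMS = {"self-harm", "suicide"}           # toy keyword list for illustration
SENSITIVE_TOPICS = {"romance_roleplay", "drugs"}  # topics rate-limited by default

def route_request(topic: str, text: str, verified_adult: bool,
                  sensitive_count: int, limit: int = 3) -> str:
    """Return a routing decision for one chat turn."""
    # Crisis content is routed to support resources for everyone.
    if any(term in text.lower() for term in CRISIS_TERMS):
        return "crisis_resources"
    # Unverified sessions default to safe mode: sensitive topics are
    # rate-limited per session rather than gated behind identity checks.
    if not verified_adult and topic in SENSITIVE_TOPICS:
        if sensitive_count >= limit:
            return "rate_limited"
        return "safe_mode_answer"
    return "normal_answer"
```

The point of the sketch is that every branch operates on session state the service already has; no branch needs to know who the user is.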

A proportionate framework would target the documented design harms directly: require default safe modes and crisis routing in minor-directed features, mandate safety evaluations of model behavior around self-harm and sexualized content, and reserve hard age-gating for the narrow category of speech Paxton actually addressed.

What Comes Next

A committee vote on the GUARD Act was expected in late April or early May 2026, and KOSA continues to move in parallel. Whichever bill clears first will shape the constitutional test cases that follow. Lower courts will have to decide what Paxton means when the speech being gated is not adult content but the everyday substance of the internet.

People of Internet's position is straightforward: children's safety online is a real problem that demands serious, evidence-based responses. But mandatory age verification across general-purpose platforms is a sledgehammer aimed at a wiring problem. It risks entrenching identity surveillance, freezing out smaller competitors, and producing a more closed, more brittle internet — without measurably reducing the harms it claims to address. Congress can do better, and the First Amendment may eventually require it to.

Sources & Citations

  1. EFF on the global age-verification wave
  2. OpenAI endorses KOSA (MediaNama)
  3. Free Speech Coalition v. Paxton (SCOTUS opinion)