Britain's Online Safety Act (OSA) has moved from statute book to live enforcement at speed. Ofcom's illegal-harms duties took effect in March 2025; the child-safety and age-assurance duties for pornography and content harmful to minors followed in July 2025. As of spring 2026, the regulator has opened formal investigations into platforms including 4chan and a cluster of file-sharing services for suspected failures to comply with information notices and illegal-harms duties. The architecture is now real, the fines are now possible, and the trade-offs the bill's critics warned about are now unavoidable.
Where enforcement stands
Ofcom's powers under the OSA are unusually broad for a UK regulator. It can compel information, demand risk assessments, fine companies up to £18 million or 10% of qualifying worldwide revenue (whichever is greater), and ultimately seek business-disruption measures against non-cooperative services. The current investigations are a stress test of that toolkit. According to Ofcom's own enforcement bulletins, the probes focus on whether platforms have produced adequate illegal-harms risk assessments and responded properly to statutory information requests — the procedural floor of the Act, before any substantive content question is even reached.
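To make the penalty ceiling concrete, a rough sketch in Python (the revenue figures are hypothetical) shows how the "whichever is greater" formula scales with the size of the business:

```python
# The OSA penalty ceiling is the greater of a fixed £18m or 10% of
# qualifying worldwide revenue; the revenue inputs below are hypothetical.
def max_osa_penalty(qualifying_worldwide_revenue_gbp: int) -> int:
    return max(18_000_000, int(0.10 * qualifying_worldwide_revenue_gbp))

print(max_osa_penalty(50_000_000))      # 18000000  -> the £18m floor binds
print(max_osa_penalty(5_000_000_000))   # 500000000 -> 10% of revenue binds
```

For a small forum the fixed floor dominates; for a global platform it is the revenue-linked cap that bites.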
That sequencing matters. If a regulator cannot get answers to basic questions about a platform's safety systems, every downstream duty is unenforceable. Ofcom is right to pursue non-responsive services. The harder question is what happens next: how the regulator interprets the substantive duties, particularly for smaller platforms, encrypted services, and content that is lawful but contested.
The age-assurance frontier
The July 2025 rules require services likely to be accessed by children to deploy "highly effective" age assurance for adult content. Major adult platforms have rolled out third-party age-verification flows; some smaller services have geo-blocked UK users rather than incur compliance costs. Both responses are predictable.
Viewed through a pro-innovation lens, two risks deserve more attention than they have received:
- Privacy externalities. Robust age assurance typically requires sharing identity signals (a face scan, a credit-card check, a digital ID) with either the platform or a vendor. The Information Commissioner's Office has published guidance pushing for data-minimising approaches, but the market is young, the vendors are concentrated, and breaches in this category are uniquely sensitive. A data-minimising flow is sketched after this list.
- Compliance asymmetry. Established platforms can absorb verification costs; UK-based startups and community-run forums often cannot. The Act's tiering mitigates this only partly, because illegal-harms duties apply broadly regardless of size.
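Both concerns come back to what crosses the boundary between vendor and platform. The sketch below is a minimal illustration in Python, assuming the pyca/cryptography package; the function names and token format are hypothetical, not any vendor's real API. It shows the data-minimising pattern the ICO guidance points toward: the vendor checks identity out-of-band, then hands the platform only a signed, short-lived over-18 claim.

```python
# Minimal sketch of a data-minimising age-assurance flow (illustrative only).
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_attestation(issuer_key: Ed25519PrivateKey, over_18: bool,
                      ttl_seconds: int = 3600) -> bytes:
    """The verification vendor checks identity documents out-of-band, then
    signs only the minimal claim the platform needs: an over-18 flag and an
    expiry. No name, date of birth, or document image leaves the vendor."""
    claim = json.dumps({"over_18": over_18,
                        "exp": int(time.time()) + ttl_seconds}).encode()
    signature = issuer_key.sign(claim)
    return claim + b"." + signature.hex().encode()


def verify_attestation(token: bytes, issuer_public_key: Ed25519PublicKey) -> bool:
    """The platform verifies the vendor's signature and reads the boolean.
    It learns whether the user is over 18, and nothing else."""
    claim, _, sig_hex = token.rpartition(b".")
    try:
        issuer_public_key.verify(bytes.fromhex(sig_hex.decode()), claim)
    except InvalidSignature:
        return False
    payload = json.loads(claim)
    return payload["over_18"] and payload["exp"] > time.time()


# Illustrative round trip: the vendor issues, the platform verifies.
vendor_key = Ed25519PrivateKey.generate()
token = issue_attestation(vendor_key, over_18=True)
print(verify_attestation(token, vendor_key.public_key()))  # True
```

The design choice that matters is the boundary: the platform never receives the document image or date of birth, only the attestation, which also limits what a breach at the platform can expose.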
4chan, file-sharers, and the limits of deterrence
The investigations into 4chan and several file-sharing services illustrate the Act's reach and its limits. These are exactly the kinds of services Parliament had in mind: platforms with weak moderation incentives and significant illegal-content exposure. Yet jurisdictional enforcement against services with no UK assets, no UK staff, and no commercial interest in the UK market is genuinely hard. Ofcom can issue fines that will not be paid and seek court orders that may be slow to bite. Business-disruption measures, which require Ofcom to ask a court to order ISPs, payment providers, and other ancillary services to cut a platform off, raise serious open-internet concerns and should be the last tool reached for, not the first.
This is not an argument against enforcement; it is an argument for selectivity. Ofcom's credibility over the next two years will rest on whether it brings clear, well-evidenced cases against genuinely non-compliant actors — and resists the temptation to use the same machinery against mainstream platforms over editorial judgement calls.
The speech costs are real
The Wikimedia Foundation's judicial review of the Category 1 designation rules, heard in 2025, made the point bluntly: rules designed for engagement-optimised social networks fit badly onto open, volunteer-edited reference projects. Wikimedia warned that identity-verification duties would be incompatible with Wikipedia's contributor model. The High Court declined to strike down the regulations but flagged that Ofcom must apply them proportionately. That instruction should ring loudly across Ofcom's wider enforcement docket.
The Open Rights Group and other civil-society groups have likewise documented over-removal of legitimate content by platforms hedging against OSA liability — a familiar pattern from every intermediary-liability regime, from Germany's NetzDG to India's IT Rules. Lawful speech is the predictable casualty when platforms face severe penalties for under-removal and only diffuse reputational costs for over-removal.
What proportionate enforcement looks like
The UK can lead on online safety without recreating the worst features of harder-edged regimes. Three principles should anchor the next phase:
- Process over content. Penalise demonstrable failures of risk assessment, transparency, and cooperation — not editorial outcomes Ofcom dislikes.
- Smallest-effective-tool. Reserve business-disruption measures for the narrowest set of demonstrably non-compliant foreign actors, with clear judicial oversight.
- Privacy-preserving age assurance. Treat data-minimising methods (zero-knowledge proofs, on-device checks, reusable digital IDs with strong governance) as the floor, not the ceiling.
The Online Safety Act is now the most consequential content-moderation statute in the English-speaking world. It can become a template for liberal-democratic regulation or a cautionary tale about regulatory overreach. The difference will be made not in the legislation itself, but in how Ofcom uses the discretion the legislation hands it. The early signs — targeted investigations into clearly non-cooperative platforms — are encouraging. The harder tests are still to come.