When the Online Safety Act 2023 finally received Royal Assent after six years of drafting, ministerial reshuffles, and the celebrated removal of its 'legal but harmful' adult content duties, it was sold as a settled compromise. The Act would target genuinely illegal material, leave lawful speech to flourish, and trust Ofcom to enforce proportionately. Eighteen months into illegal-harms enforcement, that settlement is fraying. The question facing Parliament in 2026 is whether to honour it or rewrite it under the political pressure of the next viral crisis.
What the Act Actually Does Today
Since 17 March 2025, in-scope services have been legally required to assess and mitigate the risk of illegal content reaching UK users, under Ofcom's illegal harms codes of practice. The list of priority offences is broad: terrorism, child sexual exploitation, fraud, intimate image abuse, and — relevantly here — the new 'false communications offence' under Section 179 of the Act, and the 'foreign interference offence' imported from the National Security Act 2023.
Section 179 criminalises sending a message containing information the sender knows to be false, with the intention of causing 'non-trivial psychological or physical harm' to a likely audience, and without reasonable excuse. The threshold is deliberately high. It captures malicious hoaxes; it does not capture being wrong, being partisan, or being a bad-faith commentator. The foreign interference offence requires conduct on behalf of a foreign principal with an intent to interfere in UK democratic or legal processes.
Together these offences give Ofcom a takedown lever against the worst categories of online deception — coordinated foreign influence operations and deliberately injurious lies — without conscripting platforms into adjudicating the truth of ordinary public debate. That is a defensible line.
The Southport Pressure
The line is now under sustained political pressure. After the July 2024 Southport attack, false claims about the attacker's identity and immigration status spread rapidly on X, Telegram and TikTok, contributing to the worst riots England had seen in over a decade. The Science, Innovation and Technology Committee has repeatedly asked whether the OSA, as enacted, can cope with that pattern of harm, and ministers have floated reopening the 'legal but harmful' question for misinformation specifically.
The instinct is understandable. The policy is not. Three problems are worth naming.
1. The collective-action problem the Act already addresses
Much of the worst Southport content was arguably already illegal — incitement to violence, harassment, and in some cases Section 179 false communications. The enforcement gap was not statutory; it was operational. Ofcom only began consulting on its illegal-harms codes in late 2023 and finalised them at the end of 2024. The proper test of whether the OSA works for crises like Southport is whether the 2025 regime would catch a re-run — not whether to bolt on a new regime before the first one has been tried.
2. The definitional trap
'Misinformation' has no stable legal meaning. The previous government wisely abandoned 'legal but harmful' duties for adults in 2022 after sustained warnings — from this think tank and many others — that they would push platforms toward systemic over-removal of contested political speech. Asking Ofcom or platforms to designate categories of lawful-but-false content for suppression resurrects exactly that problem, with the added complication that public-health and geopolitical 'consensus' positions of 2020-22 (lab-leak, vaccine side-effects, Ukraine battlefield claims) have repeatedly shifted.
3. The chilling effect on smaller services
The OSA's tiered duties already impose significant compliance costs. Ofcom estimates that around 100,000 services fall within scope. Expanding duties to legal-but-harmful misinformation would disproportionately burden smaller UK-based platforms, forums, and Mastodon-style services that lack the trust-and-safety budgets of Meta or Google. The likely effect is consolidation toward the very large platforms ministers say they want to discipline.
What Proportionate Enforcement Looks Like in 2026
Ofcom has signalled tougher enforcement against the largest platforms this year, with potential fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater. That is a serious lever, and it should be used against demonstrable failures to meet existing illegal-content duties, including those relating to Section 179 false communications and the foreign interference offence. Four further steps would keep that enforcement proportionate:
- Test the existing regime first. Before legislating further, Parliament should require Ofcom to publish a structured post-incident review of any major online-driven public-order event, assessing where the illegal-harms codes succeeded and where they failed.
- Resist category creep. 'Misinformation' should not become a freestanding regulatory category. If specific harms (e.g. AI-generated election deepfakes impersonating real candidates) need targeted rules, legislate narrowly and with sunset clauses.
- Invest in media literacy and rapid-response counter-speech. The Online Safety Act strengthens Ofcom's media literacy duty under section 11 of the Communications Act 2003, a tool that remains underused relative to takedown powers. Funding it properly is cheaper and freer than expanding the censorship perimeter.
- Protect smaller services. Any new duties should be calibrated to platform size and risk, consistent with the proportionality principle the Act already embeds.
The Real Test
Britain has, after years of drift, arrived at one of the most coherent online-harms frameworks in the democratic world: illegal content gets meaningful enforcement; lawful speech, including speech regulators dislike, is left alone. That settlement is not a bug to be patched after the next viral crisis. It is the feature. The job in 2026 is to make it work — not to dismantle it.