A new UK parliamentary report has confirmed what many of us in civil society have been saying for years: the Online Safety Act isn't working. But it's not just failing to stop harmful misinformation and abuse - it's failing to even understand how those harms spread.
Here's the crux of it: 0.1% of users are responsible for 80% of the disinformation circulating online. That's not a free speech problem - that's a virality problem. And yet, our laws treat every user, every post, and every platform interaction as if it carries equal weight and risk, carving out special exemptions for those users who are most likely to share demonstrably false information. Pretending this isn't the case is why the Act is falling short.
The Illusion of Equal Speech
The idea that all users are created equal online is seductive - and yet completely detached from reality. In practice, influence on digital platforms is wildly unequal. A tiny minority of accounts generate the vast majority of views, shares, and clicks. These users aren't just louder - they're structurally amplified by algorithms that reward outrage and attention.
But the Online Safety Act doesn't account for any of that.
It applies obligations to platforms based on content type, not content impact. A hate-fuelled post seen by ten people is treated the same as a near-identical post that reaches ten million simply because it was posted by a prominent political figure or commentator. And that's before you account for the fact that some of the most harmful content comes from voices the Act goes out of its way to protect.
The result is a law that fails to distinguish between an anonymous troll with 11 followers and a tabloid with a multimillion-person reach - despite the fact that one is far more likely to go viral and do real-world harm.
This isn't about whether someone can say something online. It's about whether those "free speech" rights extend to the structural incentivisation and algorithmic boosting of content for maximum reach - and, by extension, maximum profit. That's what these platforms do best. That's also what the law ignores.
Instead, the Act leans heavily on individual content moderation. But that kind of whack-a-mole approach is hopeless against modern information ecosystems. The real threat lies in the systems themselves, which allow - and even encourage - disinformation, hate speech, and conspiracies to spread at scale.
The most viral content online isn't necessarily the most true, the most thoughtful, or the most important. It's the most clickable - and often the most harmful. The Online Safety Act should recognise that amplification is what turns harmful ideas into societal threats. But it doesn't. And that's a systemic failure.
A Two-Tier Internet
Worse still, the Act doesn't just overlook structural power - it actively entrenches it. Even the Act's already insufficient rules contain three key carve-outs, which together create what can only be described as a two-tier internet:
A media exemption means that content published by outlets meeting low and easily gamed criteria is shielded from takedown - even if it's harmful and even if it targets children.
A journalism exemption ensures that anyone claiming to be publishing "for the purposes of journalism" gets added protections - with no obligation to demonstrate responsible practice or editorial oversight.
A democratic importance exemption gives special status to content that contributes to political debate - which, in practice, privileges politicians and party-affiliated voices....