The Online Safety Act: Digital Dictatorship Disguised as Child Protection
Right, let's have a proper conversation about the UK's Online Safety Act, shall we? Because while politicians are patting themselves on the back for "protecting children," they've actually created the most spectacular digital disaster since someone thought Internet Explorer was a good idea.
The Act's age verification duties came into force on 25 July 2025, and the regime is already falling apart faster than a British summer holiday.
Welcome to Britain's Great Firewall
In the 48 hours since this legislative monstrosity took effect, we've witnessed perhaps the most predictable cybersecurity failure in British history. ProtonVPN reported a more than 1,400 percent increase in sign-ups in the UK after age verification requirements took effect, proving what any teenager with half a brain could have told our lawmakers for free: internet restrictions don't work, they just teach people to circumvent them.
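As an aside on the arithmetic, because headlines routinely mangle it: a "1,400 percent increase" means the new figure is the old one plus fourteen times the old one, i.e. fifteen times the baseline, not fourteen. A quick sketch (the baseline sign-up figure is purely illustrative, not a ProtonVPN number):

```python
def apply_percent_increase(baseline: float, percent_increase: float) -> float:
    """Return the new value after a given percentage increase.

    A 1,400% increase multiplies the baseline by (1 + 14) = 15.
    """
    return baseline * (1 + percent_increase / 100)

# Hypothetical baseline of 1,000 daily UK sign-ups:
print(apply_percent_increase(1_000, 1_400))  # 15000.0
```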
But it gets worse. People are already using Sam Porter Bridges selfies from Death Stranding's photo mode to bypass Discord's age verification system. That's right: a fictional video game character is defeating Britain's "robust" age assurance technology. Norman Reedus is apparently more convincing to AI systems than our actual regulatory framework is to the people it's meant to govern.
When your digital protection strategy can be defeated by a PlayStation screenshot, you're not running a safety program. You're running a comedy show.
The Theatre of Digital Protection
Let's examine what this act actually does, because it's important to understand the scale of this disaster. The Act requires platforms to implement "highly effective age assurance" technologies, with Ofcom issuing codes detailing specific compliance measures. But here's the brutal reality: age verification technology doesn't bloody work.
Anyone trying to watch porn online in the UK now needs to subject themselves to an awkward selfie or get their photo ID ready. We're forcing British citizens to hand over biometric data and government identification to private companies just to access legal content that's been available since the dawn of the internet.
Here's the thing that drives me absolutely mental about this entire charade: we all know it's bollocks.
I'm in my mid-50s. When I was a teenager, dirty magazines were kept on the top shelf at the newsagent, theoretically out of reach of children. Did that stop any of us from seeing them? Of course it bloody didn't. Every lad I knew had managed to get a glimpse of a Playboy or Penthouse by the time they hit puberty, whether through an older brother, a mate's dad's collection, or simply waiting for the newsagent to turn around.
The idea that you can prevent teenagers from accessing sexual content is the same fantasy now as it was 50 years ago, except now we're destroying privacy rights and building surveillance infrastructure to chase that impossible dream.
This isn't protection. It's surveillance capitalism with a government stamp of approval, wrapped in the same moral panic that's been recycling through British society since the printing press was invented.
The technology requirements are laughably inadequate. The UK's new law also makes it illegal for websites to promote VPNs as a way around age verification, yet VPN usage has exploded by 1,400% since implementation. It's like banning umbrella adverts and then wondering why everyone is still staying dry in the rain.
The Failure Was Baked In From Day One
This isn't some unforeseen consequence. The Online Safety Act was designed to fail by people who fundamentally don't understand how the internet works. The previous attempt at internet age verification was abandoned in 2019 after repeated delays and setbacks, when the government dropped the relevant duties under the Digital Economy Act 2017.
We literally tried this exact same approach before. It failed spectacularly. So naturally, we decided to try it again with even more bureaucracy.
The Online Harms White Paper that started all this was signed off by the then Secretary of State back in 2019. Here we are in 2025, and the Online Safety Act is still not fully in force. Six years from conception to partial implementation, and it's already being defeated by teenagers with VPNs and Death Stranding screenshots.
The implementation timeline alone reveals the dysfunction. Ofcom expects to publish the register of categorised services in Summer 2025 and consult on the codes of practice for additional duties on categorised services by early 2026. We're not even halfway through implementing a law that's already proven unenforceable.
Ofcom: The Regulator That Can't Regulate
Let's talk about Ofcom's enforcement approach, because it's a masterclass in regulatory incompetence. Ofcom has launched investigations into services that failed to respond to statutory information requests or implement adequate illegal content risk assessments. Their response to non-compliance? Strongly worded letters and public naming and shaming.
Ofcom can impose fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, but it's chasing companies that are ignoring it entirely. In Ofcom's own words: "We required Kick Online Entertainment S.A. to submit the record of its illegal content risk assessment to us so we could consider whether it complies with its duties under the Act. The company has failed to respond."
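For scale, that penalty cap is generally described as the greater of the two limbs. A minimal sketch of the arithmetic (the function name and the £1 billion revenue figure are illustrative assumptions, not figures from the Act):

```python
def max_fine(qualifying_worldwide_revenue: float) -> float:
    """Upper bound on an Online Safety Act fine, in GBP (sketch).

    The cap is commonly described as the greater of GBP 18 million
    or 10% of qualifying worldwide revenue.
    """
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

# For a platform turning over GBP 1 billion (hypothetical), the 10% limb dominates:
print(max_fine(1_000_000_000))  # 100000000.0
```

For smaller operators, the flat £18 million limb is the binding one, which is precisely why it dwarfs companies like forum hosts while barely registering for the giants.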
When your regulatory regime can be defeated by simply not answering emails, you're not running a law enforcement operation. You're running a suggestion box with delusions of grandeur.
The Technical Impossibility Nobody Mentions
Here's the part nobody mentions: the Act requires technical capabilities that don't exist. It requires platforms, including end-to-end encrypted messengers, to scan for child sexual abuse material, despite warnings from experts that no such scanning mechanism can be implemented without undermining users' privacy.
Signal and other encrypted messaging services have threatened to withdraw from the UK market rather than break their encryption. The government has claimed that it does not intend to enforce this provision of the Act until it becomes "technically feasible" to do so.
So we've passed a law requiring technology that doesn't exist, with enforcement that depends on future technical breakthroughs that may never happen. This is like legislating that cars must fly and then claiming you'll enforce it once someone invents anti-gravity.
The Real World Consequences
While politicians debate the theoretical benefits of this digital surveillance regime, real businesses are making practical decisions. London Fixed Gear and Single Speed, a forum for fixed-gear and single-speed bicycle enthusiasts, announced its closure, citing the high cost of legal compliance, as did Microcosm, a provider of forum hosting for non-commercial, non-profit communities.
We're destroying British online communities to implement age verification that can be defeated by PlayStation screenshots.
And for what? To prevent something that's been happening since humans discovered reproduction? Research shows that 8% of children aged 8–14 in the UK visit online pornography sites, including 3% of 8–9-year-olds. These statistics sound alarming until you remember that previous generations managed to discover sex without the internet, without smartphones, and despite every "top shelf" restriction we could devise.
The average age at which children first see pornography is 13: exactly the age at which previous generations were passing around stolen magazines behind the bike sheds.
The difference isn't that children are seeing sexual content earlier. The difference is that we now have moral entrepreneurs building careers on protecting children from experiences that every generation has had, using technology that fundamentally cannot work.
In May 2025, the Wikimedia Foundation launched a legal challenge against potential designation as a "category one" service under the Act, which would subject Wikipedia to the most stringent requirements. We're potentially going to lose access to Wikipedia because our lawmakers think they can regulate the internet like it's a television broadcast.
The American Perspective: Targeted Destruction
The international implications are equally disastrous. The UK's Online Safety Act systematically disadvantages U.S. technology companies through threshold-based requirements and global revenue penalties that specifically target American platforms' scale.
The 34 million user threshold captures leading U.S. platforms while exempting regional services and newer entrants. This discriminatory threshold forces U.S. companies to implement expensive compliance infrastructure, hire local content moderation teams, and modify global products for UK-specific requirements, while competitors below the threshold avoid these burdens entirely.
We're not protecting British children. We're handicapping British access to global technology platforms while creating competitive advantages for companies smart enough to stay small and unregulated.
The Enforcement Theatre
Ofcom's enforcement approach reveals the fundamental inadequacy of the entire regime. It has opened investigations into services that simply ignored statutory information requests, then publicly named them for non-compliance. The strategy, in other words, is shaming companies into obedience.
When your enforcement mechanism relies on companies feeling embarrassed about being mentioned in press releases, you're not running a regulatory regime. You're running a gossip column with legal pretensions.
The Children's Commissioner Network argued that one of the "most prominent" gaps in Ofcom's implementation was its failure to "come up with requirements that deliver on the act's 'safe by design' objective". Even the children's advocates think Ofcom is failing to implement the law properly.
The Circumvention Celebration
The ease with which people are defeating these measures is almost insulting to British intelligence. The most obvious loophole is simply to install a virtual private network (VPN), and VPN providers are literally advertising their services as Online Safety Act circumvention tools.
We've created a law that primarily serves as a marketing opportunity for VPN companies.
"Our research shows that these are not people that are out to find porn – it's being served up to them in their feeds," Oliver Griffiths, group director for online safety at Ofcom, told The Sun. So Ofcom's own research contradicts the fundamental premise of the age verification requirements, yet they're implementing them anyway.
When your regulator admits their own requirements don't address the problem they're meant to solve, you've achieved a level of bureaucratic incompetence that would make Kafka proud.
The Privacy Catastrophe
Let's talk about what this actually means for ordinary British citizens. The UK's Online Safety Act 2023 requires platforms to implement "highly effective" and "robust" age verification systems by 25 July 2025. These systems require biometric data, government identification, or financial verification.
We're forcing British citizens to surrender their most sensitive personal data to private companies to access legal content. Every porn site, social media platform, and user-generated content service now has databases of British citizens' faces, identification documents, and verification status.
The privacy implications are staggering. Most third-party verification services pass only an "age confirmed" status to websites, not your full details, but the verification companies themselves become honeypots: whatever data they process or retain links real identities to the sites people chose to verify for, and when.
The International Isolation
The Act's impact on international services reveals Britain's growing digital isolation. Lobsters, a programming- and technology-focussed discussion site, announced that it would block UK users in order to comply. We're creating a situation where British users are excluded from global online communities.
Britain is voluntarily building its own Great Firewall, and calling it child protection.
The compliance costs are driving services away from British users entirely. Rather than implement expensive age verification for a single market, international platforms are choosing to exclude British users. We're not protecting British children; we're isolating them from global educational and cultural resources.
The Government's Own Admission of Failure
Perhaps most damning is the government's own acknowledgment that the system doesn't work. The Prime Minister's office appears to have confirmed that the Act is not under "active review"; the focus is on getting it implemented "quickly and effectively" rather than changing it.
They know it's broken. They're implementing it anyway.
Against a backdrop of riots and disorder in Summer 2024, some have raised concern that the UK's Online Safety Act does not go far enough in tackling misinformation that can fuel disorder. The law doesn't even address the problems it was ostensibly designed to solve, yet we're pressing ahead with full implementation.
The Bottom Line: Digital Authoritarianism
The UK Online Safety Act represents the most comprehensive failure of digital policy in British history. We've created a surveillance regime that doesn't protect children, doesn't stop harmful content, and doesn't work technically, while destroying online communities and isolating British users from global platforms.
This isn't about protecting children. It's about control.
Every generation of British parents has worried about their children accessing inappropriate content. In the 1950s, it was rock and roll corrupting the youth. In the 1960s, it was television. In the 1980s, it was video nasties. In the 1990s, it was violent video games. Now it's the internet.
The pattern is always the same: moral panic, legislative overreach, unintended consequences, and then we move on to the next moral panic while leaving the surveillance infrastructure in place.
What we're witnessing isn't child protection. It's the digital equivalent of Prohibition: it doesn't work, it creates massive secondary problems, and it primarily benefits criminals and the black market while undermining respect for law and authority.
The Act hands the relevant Secretary of State unprecedented power to designate a wide range of online content as "illegal" or "deemed harmful to children" and to suppress it through regulatory pressure. Who decides what's harmful? The same government that thought age verification wouldn't be immediately circumvented by VPNs.
We've handed our democratic government the tools of digital authoritarianism and convinced ourselves we're protecting children while we do it. The fact that these tools don't work technically doesn't make them less dangerous politically. The infrastructure we're building today will outlast the moral panic that created it.
The Online Safety Act will never work because it was never designed to work. It was designed to give politicians something to point at when parents complain about the internet, while giving regulators and civil servants expanded powers over digital communications.
The 1,400% increase in VPN usage proves that British citizens understand what our lawmakers apparently don't: internet restrictions don't protect children, they just teach everyone to evade authority.
Pull up a chair to the circumvention party. It's the only rational response to a law this fundamentally broken.
Source | Article |
---|---|
The Register | UK VPN demand soars after debut of Online Safety Act |
ProtonVPN | Sign-ups Surge 1,400% as UK Enforces Online Safety Act Age Checks |
PC Gamer | Brits can get around Discord's age verification thanks to Death Stranding's photo mode |
Gov.UK | Online Safety Act: explainer |
Ofcom | Ofcom's approach to implementing the Online Safety Act |
House of Commons Library | Implementation of the Online Safety Act |
UK Parliament | Online Safety Act: Implementation - Hansard |
ITIF | The UK's Online Safety Act |
Wikipedia | Online Safety Act 2023 |
Hogan Lovells | What's next for the UK's Online Safety Act and can it solve the misinformation problem? |
Latham & Watkins | UK Online Safety Act — Summer 2025 Deadlines |
The Tab | All the unhinged loopholes people are using to get past the Online Safety Act |