Why Regulating Telegram is a Dangerous Fantasy That Will Backfire on Ofcom

Ofcom is bringing a knife to a digital gunfight, and they don't even realize the knife is made of cardboard.

The UK regulator’s investigation into Telegram’s alleged failure to curb child sexual abuse material (CSAM) is being hailed as a "landmark moment" for the Online Safety Act. It isn't. It is a performative gesture that fundamentally misunderstands how encryption, decentralized infrastructure, and global jurisdiction actually work. While the mainstream press treats this as a simple case of a "bad actor" platform needing a timeout, the reality is far more uncomfortable: regulators are chasing a ghost they cannot catch, and their pursuit risks shattering the very privacy architecture that protects billions.

The Encryption Fallacy

Most reporting on the Ofcom-Telegram saga operates on a flawed premise: that Telegram can simply "flip a switch" to scrub illicit content without compromising user security.

Here is the technical reality that regulators choose to ignore. Telegram occupies an architectural gray area. Its standard chats are not "end-to-end encrypted" (E2EE); they are encrypted only between client and server, with E2EE reserved for opt-in Secret Chats—a point many critics seize on to demand intervention. At the same time, it runs on a server model distributed across jurisdictions, designed specifically to thwart state-level overreach. Pavel Durov didn’t build Telegram to be a social network; he built it to be an anti-censorship fortress.

When Ofcom demands "proactive monitoring," they are essentially demanding a backdoor. You cannot have a "secure" door that also opens for the government on command. History shows us that once a vulnerability is created for "the good guys," the bad guys are usually the first ones to walk through it.
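To see what a "backdoor" mandate looks like in practice, consider key escrow, the mechanism these proposals keep reinventing. What follows is a toy sketch, not any real proposal's design; every name in it is illustrative, and it assumes the open-source `cryptography` package:

```python
# Toy key-escrow sketch: the classic shape of a "lawful access" backdoor.
# Requires `pip install cryptography`. All names here are illustrative;
# this mirrors no specific government proposal.
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()   # held by your contact
escrow_key = Fernet.generate_key()      # held by "the good guys"

def send(plaintext: bytes) -> dict:
    session_key = Fernet.generate_key()
    return {
        "body": Fernet(session_key).encrypt(plaintext),
        # The session key is wrapped TWICE: once for the recipient,
        # once for the escrow authority. The second wrap IS the backdoor.
        "for_recipient": Fernet(recipient_key).encrypt(session_key),
        "for_escrow": Fernet(escrow_key).encrypt(session_key),
    }

msg = send(b"see you at 8")

# Anyone who obtains escrow_key (a court order, a leak, a breach)
# can read every message ever sent under this scheme:
session = Fernet(escrow_key).decrypt(msg["for_escrow"])
print(Fernet(session).decrypt(msg["body"]))  # b'see you at 8'
```

One key, held by someone other than you, unlocks everything you have ever sent. That is the entire design space of "lawful access," and it is why the vulnerability never stays exclusive to the good guys.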

I’ve watched regulators try this dance with WhatsApp and Signal for years. They frame it as a choice between "child safety" and "corporate greed." That is a false dichotomy. The real choice is between a world with private communication and a world where every message is a public record accessible to the highest bidder or the most persistent hacker.

The Jurisdiction Trap

Ofcom’s bark is loud, but its bite is geographically limited.

The Online Safety Act claims to have "global reach" because it targets services used by UK citizens. This is a jurisdictional hallucination. Telegram is headquartered in Dubai. Its infrastructure is fragmented across multiple continents. If Ofcom levies a fine—even one totaling 10% of global turnover—Telegram has little incentive to pay it, and none to comply, when compliance would mean fundamentally breaking the product’s core value proposition: defiance of state control.

Imagine a scenario where the UK blocks Telegram. Within twelve minutes, every teenager in London would have a VPN installed. Blocking a platform doesn't remove the content; it simply pushes the actors further into the dark, into unindexed corners of the web where law enforcement has zero visibility. By trying to "clean up" Telegram, Ofcom is effectively incentivizing the migration of illicit networks to P2P (peer-to-peer) platforms that have no central authority to subpoena.

We are watching the "Streisand Effect" play out at a policy level. The more the UK government signals that Telegram is the Wild West, the more they signal to illicit actors exactly where to set up shop—and the more they signal to legitimate users that the government is coming for their private data.

The Moderation Myth

The "lazy consensus" among pundits is that "Big Tech just needs to hire more moderators."

This assumes that CSAM detection is a human-scale problem. It isn't. Telegram hosts billions of messages per day. Even with the most sophisticated hashing tools—Microsoft’s PhotoDNA or Google’s Content Safety API—detection is an arms race. Criminals use "adversarial perturbations": pixel-level changes too small for the human eye to register, but large enough to change an image's fingerprint and slip past automated filters.
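To make the arms race concrete, here is a toy sketch using a simple "average hash." PhotoDNA is proprietary, so this stands in for perceptual hashing in general; the image and the perturbation are synthetic:

```python
# Toy "average hash" (aHash) demo -- NOT PhotoDNA, which is proprietary.
# Shows how a sub-perceptual pixel nudge can flip perceptual-hash bits.
import numpy as np

def average_hash(pixels: np.ndarray, size: int = 8) -> np.ndarray:
    """Block-average to size x size, threshold each cell at the global mean."""
    n = pixels.shape[0] // size
    blocks = pixels.reshape(size, n, size, n).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

rng = np.random.default_rng(42)
img = rng.integers(0, 256, (64, 64)).astype(float)  # stand-in 64x64 image
h0 = average_hash(img)

# Adversarial nudge: find the block whose mean sits closest to the hash's
# decision threshold and shift its 64 pixels by one brightness level --
# invisible at 8 bits per channel, but enough to cross the threshold.
blocks = img.reshape(8, 8, 8, 8).mean(axis=(1, 3))
target = int(np.abs(blocks - blocks.mean()).argmin())
r, c = divmod(target, 8)
adv = img.copy()
direction = 1.0 if blocks.flat[target] <= blocks.mean() else -1.0
adv[r * 8:(r + 1) * 8, c * 8:(c + 1) * 8] += direction

h1 = average_hash(adv)
print(f"pixels touched: 64 / 4096, hash bits flipped: {np.sum(h0 != h1)}")
```

Real matchers tolerate a few flipped bits by accepting small Hamming distances, but that only moves the goalposts: the attacker iterates until the distance clears whatever threshold the filter uses.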

If Ofcom forces Telegram to implement client-side scanning (scanning images on your phone before they are sent), they are effectively turning every smartphone in the UK into a government surveillance device. This isn't a "step toward safety"; it is the end of privacy as a legal right: in the UK's case, the Article 8 protections of the European Convention on Human Rights, the rough counterpart of America's Fourth Amendment.
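For readers who haven't followed the client-side scanning debate, here is a skeletal sketch of the architecture on the table. Everything in it is a placeholder (real proposals, such as Apple's shelved NeuralHash system, use perceptual hashes rather than exact SHA-256 matches), but the structural point survives the simplification: the matching, and the reporting, happen on your device, against a list you cannot inspect.

```python
# Skeletal client-side scanning flow. All names are illustrative
# placeholders, not any vendor's real API. Exact-match SHA-256 stands in
# for the perceptual hashes real proposals use.
import hashlib

FORBIDDEN_SAMPLE = b"stand-in for a known-bad file"
# The regulator-supplied digest list is opaque: the device owner has no
# way to audit what is on it.
BLOCKLIST = {hashlib.sha256(FORBIDDEN_SAMPLE).hexdigest()}

def scan_before_send(attachment: bytes) -> bool:
    """Runs on the handset, *before* encryption ever happens."""
    return hashlib.sha256(attachment).hexdigest() not in BLOCKLIST

def send_message(attachment: bytes) -> str:
    if not scan_before_send(attachment):
        # In proposed schemes a match is reported upstream automatically,
        # which is what turns the phone itself into the surveillance node.
        return "blocked and reported"
    return "encrypted and sent"

print(send_message(b"holiday photo"))   # encrypted and sent
print(send_message(FORBIDDEN_SAMPLE))   # blocked and reported
```

Note where the check runs: before encryption. The encryption is left "intact" only in the sense that it no longer matters.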

Regulators talk about "safety by design." What they actually mean is "surveillance by design."

Why the Current Approach Fails

If you want to stop the distribution of CSAM, you don't go after the pipes; you go after the source.

Targeting the platform is the path of least resistance for a regulator because it creates a visible "win" for the evening news. Investigating, infiltrating, and dismantling the actual criminal networks behind the content is hard, expensive, and largely invisible to the public.

  • Platform-centric regulation: Easy to write, impossible to enforce, creates massive collateral damage to privacy.
  • Offender-centric enforcement: Difficult to execute, requires international police cooperation, actually works.

Ofcom is choosing the former because it justifies their budget and satisfies the political urge to "do something." But we have seen this movie before. When the US passed FOSTA-SESTA to curb sex trafficking, it didn't stop trafficking. It made it more dangerous for victims by pushing the activity into the shadows and stripping away the digital trail that police previously used to track offenders.

The Price of "Safety"

Let’s be brutally honest about the trade-off.

If we give Ofcom the power to dictate what an encrypted or semi-encrypted platform must monitor, we are handing a blank check to every future administration. Today it is CSAM—a cause everyone agrees is noble. Tomorrow it is "misinformation." The day after, it is "political dissent."

The infrastructure of oppression is always built with the bricks of "protection."

I have seen companies spend millions on compliance departments that do nothing but check boxes to satisfy regulators, while the actual rot on their platforms continues unabated. Compliance is not safety. A "transparency report" is not a shield for a child.

Telegram’s refusal to play ball isn't just "tech bro" arrogance. It is a recognition that the moment you concede the principle of private communication, you lose the game entirely.

The Unconventional Reality

The real question isn't "Why won't Telegram moderate?" The question is "Why are we expecting a messaging app to solve a systemic failure of global law enforcement?"

Law enforcement agencies are chronically underfunded in the cyber-crime sector. They rely on "low-hanging fruit" from platforms like Facebook, which maintain formal channels for handing over data on request. When they hit a wall like Telegram, they don't ask for better tools or more undercover agents; they ask for the wall to be torn down.

Ofcom is attempting to regulate the internet as if it’s a broadcast television station from 1985. They are looking for a "Licensee" to punish. But the internet is not a broadcast; it’s a conversation. You can’t fine a conversation into submission.

The Actionable Truth

If you are a policymaker, stop trying to break encryption. Instead, pivot your resources toward:

  1. Direct Police Funding: Shift the focus from "regulating platforms" to "funding international cyber-task forces" that can infiltrate these groups.
  2. Harm Reduction: Focus on the real-world interventions that prevent abuse before it becomes a digital file.
  3. Technical Realism: Accept that 100% safety is a lie. Any system that promises it is either lying to you or selling you a surveillance state.

Ofcom’s investigation will likely result in a massive fine. Telegram will likely ignore it or engage in a decade of legal theater. The UK government will claim victory. The content will move to a different app.

And your privacy will be the only thing that actually gets destroyed.

Stop looking for the "report" button to save the world. It’s a placebo for a much deeper disease.

Isabella Gonzalez

As a veteran correspondent, Isabella Gonzalez has reported from across the globe, bringing firsthand perspectives to international stories and local issues.