Nanny eState: The Online Safety Bill

As the UK Government seeks to clamp down on “legal but harmful” digital content, new legislation will have far-reaching implications for communications in the UK and beyond. What will the Online Safety Bill mean for users and digital businesses?

At the intersection of online safety, free speech and an increasingly digital society comes yet another tranche of far-reaching legislation from the UK Government, in an effort to make the country “the safest place in the world to go online.”

Channel-flogging Culture Secretary Nadine Dorries, who recently demanded to know when Microsoft were going to get rid of algorithms, argues that the new bill will target “harm, abuse and criminal behaviour,” and above all, will “protect people from anonymous trolls online.”

The law specifically presses online service providers to go after “legal but harmful” content – just what falls under that umbrella is to be spelled out at a later date, but it is likely to include content promoting “self-harm, harassment and eating disorders,” among others.

There will be a set of new criminal offences targeting misinformation, cyberbullying and cyberflashing – the unsolicited sending of graphic sexual images. Technology companies also get a set of new duties, such as preventing fraudulent paid adverts – the kind that cajole users into online scams – from appearing in news feeds.

Any firm breaching the rules would face a fine of up to £18 million or 10% of annual turnover, while repeat offenders could be blocked entirely. The bill also gives Ofcom – whose newly appointed chairman, Michael Grade, has admitted he doesn’t use social media – stronger regulatory powers to investigate breaches.


A History of Dodging Liability

Public carriers have long been held not liable for the contents of material delivered by their services – a principle that predates even the invention of the telegraph. The Carriers Act of 1830 enshrined it in law for stagecoach proprietors, exonerating them from liability for the loss of transported goods – in those days, usually gold and silver coins or precious stones, rather than inflammatory social posts.

A series of landmark legal battles in the US during the 1990s – involving the likes of CompuServe, brokerage firm Stratton Oakmont (of The Wolf of Wall Street infamy) and a Seattle magazine proprietor falsely implicated in the Oklahoma City Bombing – brought into question whether online service providers were likewise responsible for the content shared through their platforms.

These led, in 1996, to the passage of Section 230 of the Communications Decency Act, which classed such platforms as providers of “interactive computer services” rather than publishers, granting them immunity from liability for content on their services. 1998’s Digital Millennium Copyright Act (DMCA) further insulated them from consequences, provided they complied with takedown notices for infringing content and banned repeat offenders.
 

Bones of Contention

Julian Knight, Chair of the UK’s DCMS Committee, warned that the bill “neither protects freedom of expression nor is it clear nor robust enough to tackle illegal and harmful online content.”

Of course, any user with an ounce of tech-savviness will be able to use a VPN or private browser to circumvent blocks with ease, as we’ve seen in many countries with repressive approaches to Internet moderation.

But the problems arising from the bill extend beyond its poor implementation to the chilling effect it may have on free and open discourse online. Though the vague definitions of “harm” mean the bill can better adapt to new and emerging forms of online trouble – addressing the long-standing problem of legislation failing to keep pace with the technology it regulates – they equally mean no-one can be entirely sure when their heated online debates tip over into abuse under these new rules.

The Public Order Act 1986 provides that an individual may be defended from charges of harassment if they “had no reason to believe that there was any person within hearing or sight who was likely to be caused harassment, alarm or distress.” In the online realm, it’s far harder to control who sees what you share.

With mountains of online content and interactions generated every day, reactive content moderation is likely to become strained and increasingly erratic, particularly around ambiguous or tricky subject matter that automated systems have trouble parsing – as countless examples of that problem named after a certain town in North Lincolnshire testify.

Though this has produced some amusing situations – such as when automated filtering software prevented palaeontologists from saying “bone” during an online conference – the real difficulty lies in defining what content counts as “harmful,” and erring too far on the side of caution may end up outright stifling meaningful debate.
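To see the Scunthorpe problem in miniature, consider the following sketch – a hypothetical, deliberately naive filter, not the logic of any real moderation system – showing how substring matching flags innocent words, and how even the obvious word-boundary fix cannot rescue legitimate uses of a blocked term:

```python
import re

# Hypothetical blocklist -- real systems use far larger, fuzzier lists.
BLOCKLIST = ["ass", "bone"]

def naive_filter(text: str) -> bool:
    """Flag text if any blocked term appears anywhere, even inside other words."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches, avoiding Scunthorpe-style false positives."""
    return any(re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)
               for term in BLOCKLIST)

print(naive_filter("The class assessment is due"))          # True  -- false positive
print(word_boundary_filter("The class assessment is due"))  # False -- fixed
print(word_boundary_filter("A T. rex bone from the dig"))   # True  -- palaeontology
                                                            # still blocked
```

The word-boundary version cures the embedded-string failure, but as the “bone” case shows, no amount of pattern matching can tell whether a flagged word is being used harmfully – that judgment needs context the filter simply doesn’t have.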
 

Won’t Somebody Think of the Children?!

While previously it was the whims of social media companies that determined what constituted unacceptable material on their platforms, now that call would fall to the government.

Well-meaning but emotive appeals to “protect the children” or “prevent the spread of disinformation” mean any nuanced discussion of sensitive topics risks being unduly targeted; during the COVID pandemic, a major early Danish study on the efficacy of masks was declared misinformation by Facebook and censored, even though the study’s results were ultimately inconclusive, while a video of a British MP criticising vaccine passports was removed by YouTube.

The bill may also have unintended consequences for online life in Britain, discouraging business competition, or even seeing some businesses outright block UK users rather than go through the rigmarole of monitoring content on their behalf – much as many US-based websites still block European visitors rather than comply with GDPR rules.

For smaller companies, the added red tape of complying with these diktats may be more trouble than it’s worth, further concentrating control over online discourse in the hands of only the biggest tech companies.
 

Encryption Friction

One of the UK government’s long-stated goals, served in part by the Online Safety Bill, is the weakening of the encryption employed by messaging services to make data flows easier to intercept – raising the prospect of compromised end-to-end encryption and invasive monitoring of users’ private messages for so-called harms. However, many service providers may simply opt to block UK users, or wind up their operations, rather than comply with the new rules.
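To see why scanning for “harms” sits so awkwardly with end-to-end encryption, here is a minimal sketch using the PyNaCl library – the participants and message are illustrative, and this stands in for the far more elaborate protocols real messengers use. The relaying server only ever handles ciphertext, so any scanning duty forces the provider either to hold decryption keys itself or to inspect messages on the device before they are encrypted, both of which break the end-to-end guarantee.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a keypair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"Meet at noon")

# The relaying server sees only this opaque blob -- it holds no key
# capable of decrypting it, so it has nothing to scan.
print(ciphertext.hex())

# Only Bob, holding his private key, can recover the plaintext.
receiver_box = Box(bob_key, alice_key.public_key)
print(receiver_box.decrypt(ciphertext))  # b'Meet at noon'
```

Any client-side scanning mandate would have to intervene before that encrypt() call – which is precisely the weakening of private communications the bill’s critics warn about.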
 
Despite good intentions and an obvious need to address questionable online content, the heavy-handed Online Safety Bill – drafted by key decision-makers shockingly ill-informed as to the full consequences of their actions – risks undermining national online security and the integrity of users’ communications.

Companies must consider how they can most efficiently comply with the new rules without hobbling their own services or wrecking the user experience.

UPDATE [29/11/22]: The UK Government announced that measures to tackle “legal but harmful” content have been axed from the Online Safety Bill.

About the author

Adam Hughes

Cerillion
