I’m going to spend the next few minutes explaining why I think the Government’s White Paper on Online Harms and its so-called “duty of care” is not the answer to online disinformation and why the way forward should be focused on technology and education.
Media and information law requires a careful balancing act between the protection of reputation, privacy, and human dignity on one hand, and freedom of expression and innovation on the other.
But this balancing act is often difficult and uncertain.
In 2012 Sally Bercow posted a tweet that was seven words long: “Why is Lord McAlpine trending? *innocent face*”. It apparently caused serious harm to Lord McAlpine’s reputation by impliedly and falsely accusing him of being the mystery senior Tory politician accused of sex abuse. The judgment on the meaning of those words alone ran to 17 pages and the legal costs were astronomical. It was all about context. In media law, context is everything.
It’s ambitious enough for the Government to tackle unlawful content, but to extend this ambition to include legal but potentially harmful content is a giant leap too far.
There are quite a lot of reasons why this is the case but I’m going to pick just two from a litigator’s perspective.
First, disinformation is far too vague a concept with which to impose legal liability on social media platforms.
Second, any regulatory system derived from a “duty of care” is bound to be misunderstood and lead to litigation.
Legal certainty
So on my first point about legal certainty, let’s be really clear what we are talking about.
Quick summary of the White Paper
For those who haven’t read it yet, the White Paper seeks to impose a statutory duty of care on digital players of all sizes, requiring them to put in place a range of measures to protect their users from “online harms”.
These so-called online harms will be policed by a new regulator responsible for drawing up codes of conduct, with powers to impose a range of sanctions, including fines, if the duty of care is not met.
The list of harms is divided into “clearly defined harms” (some of which are not clearly defined at all) and “harms with a less clear definition”.
The harms with a less clear definition are not necessarily illegal. They include “cyberbullying and trolling”, “coercive behaviour”, “intimidation” and the subject of today’s conference – “disinformation”.
Definition of disinformation
So, what do we mean by “disinformation”?
The White Paper defines “disinformation” as “spreading false information to deceive deliberately”. It defines “misinformation” as “the inadvertent sharing of false information.”
Thankfully, the White Paper does not seek to regulate misinformation, which really would be biting off more than it can chew.
But let’s just look at the definition of disinformation. It includes three elements:
- The information must be false
- It must deceive
- Such deception must be deliberate
Each one of these elements depends on context and evidence.
What did Sally Bercow’s tweet actually mean? Who did it deceive and why? Was it deliberate or inadvertent? All of these questions depended on its surrounding Twittersphere and wider media context. The same can be said for many examples of disinformation.
But the problem with disinformation, as opposed to defamation, is that even if all three of these vague elements are made out, the content in question may still be lawful.
So how is a group of content moderators supposed to decide whether or not to take down content that is perfectly legal?
The E-Commerce Directive limits the liability of platforms and other online intermediaries provided they remove unlawful content once they have knowledge of it.
The Government says in the White Paper that the duty of care can be applied consistently with the current framework for limitations of liability.
But although the White Paper says that individuals would have no rights of compensation or to have their complaints adjudicated, it does suggest that they would have “scope to use the regulator’s findings in any claim against a company in the courts on grounds of negligence or breach of contract.”
That to me comes dangerously close to undermining the protection of the E-Commerce Directive regime as well as suggesting a new type of negligence claim through the back door.
We need to be very careful here. There are very good reasons why the limitations of liability in the E-Commerce Directive are in place and why it would be totally wrong to make intermediary platforms potentially liable for content at the point of publication in the same way as newspapers.
The erosion of these limitations would lead to over-removal of legitimate speech, stifle innovation, and – as recognised by the Cairncross Review – reduce the availability of online news.
All of that would be to the detriment of all internet users and would be particularly damaging for the many internet start-ups that provide social media functions in the UK.
Abuse of the system
My second headline point is about abuse of the system. This follows on from the problem of uncertainty.
The proposed regulatory system does not envisage adjudicating on individual cases. But that will not stop individuals from complaining, nor will it stop them trying to bring bad claims in the courts on the back of the duty of care.
There are already many complaints to regulators and the courts that don’t get past the first hurdle because they are utterly hopeless.
We need look no further than the GDPR to see how a law that means well can be abused by vexatious claimants and ambulance-chasing law firms.
The same will happen if we introduce a duty of care concept in relation to disinformation and other loosely defined online harms. We would almost certainly see individuals and their lawyers trying to concoct breach of contract and negligence claims on the back of regulatory findings of breach.
The Cairncross Review took the same approach of avoiding a liability regime: it recommended that, although the origins of online news should be placed under regulatory scrutiny, this would only be to gather information in the first instance. No adjudications, no fines.
I should be clear that the question of liability is separate from whether social media platforms have a moral and social responsibility to help tackle online harms. Everyone agrees that they do, including the platforms themselves.
Which leaves the question: what should we do about disinformation?
Technology, education and awareness
I was surprised by how little of the White Paper was devoted to technology, education and awareness. There are six pages of it, tucked away starting at page 77. This is where all the good stuff is, but there needs to be more of it.
Technology
The relevant platforms are already doing a lot to combat disinformation, and they know they need to keep investing in this area. Disinformation doesn’t support their advertising relationships and certainly doesn’t enhance their reputations.
But there is a perception in some quarters that the big tech players have a magic wand which they can wave at any time to find and remove unlawful content. That’s simply not the case, and it also takes no account of the many small companies that need to grapple with these same problems.
Of course, there have been developments in AI, image recognition and search. But, as I’ve said, so much of media law depends on context and machines have not yet mastered the art of distinguishing humour from malice and fact from comment.
As the technology gets better, we need to make sure there are no legal disincentives to platforms of all sizes using it and being transparent about it.
Under the current regime, both technology and media companies of all sizes can be nervous about employing people and technology to flag unlawful content before or shortly after publication. Why? Because they risk losing their neutral status and losing their limitations of liability.
As a result of this disincentive, for the last 20 years, most platforms and publishers have adopted a system of post-moderation or “notice and take-down”. Unfortunately, this system is not fast enough for some of the most harmful content.
In short, the position needs to be put beyond doubt and these artificial disincentives removed.
Conclusion
So, to summarise:
- “Duty of Care” is the wrong concept for this proposed legislation, especially as it applies to legal harms like disinformation. The focus should be on responsible management of platforms and accountability, not liability.
- The Government should scale back its ambition to focus on what is illegal and defined, not legal and vague.
- The Government should remove the artificial disincentives that inhibit the deployment of technology solutions and real time moderation.
- And we need to improve education and awareness in schools and online.
Finally, what I will say about the White Paper is that it appears well-intentioned and has sparked debate on these important issues like never before. But there is still an awful lot of misinformation and disinformation on the subject.
The Government needs to pause and reduce its ambition to make sure that it doesn’t set a bad example for the rest of the world to follow.