We are living in an age in which there is intense pressure
from every direction to make online intermediaries – or at least the social
media platforms – police their users’ content and behaviour.
The demand is not merely to police. The European
Commission’s Communication on Tackling
Illegal Content Online, published in September 2017, wants platforms to
act as detective, informant, arresting officer, prosecutor, defence, judge,
jury and prison warder: everything from sniffing out content and deciding
whether it is illegal to locking the accused material away from public view and
making sure the cell door stays shut.
‘Accused’ is the vital word. Nowhere in the Commission’s
scheme do we find a requirement for a competent authority to determine that the
content in question is actually illegal.
Quite the opposite, in fact. The Communication kicks off by
saying that online platforms should be able to take swift decisions on action
against illegal content without a court order or administrative decision.
This stance contradicts the EU Council’s Human Rights
Guidelines on Freedom of Expression Online and Offline (2014), which speak of
the ‘need to promote international standards, including standards protecting
intermediaries from the obligation of blocking internet content without prior due
process’. The emphasis is not just on due process, but on prior due process: before takedown, not after.
The Communication does not stop there. It wants platforms
to:
- apply fully automated removal where the circumstances leave little doubt about the illegality of the material (for example where the removal is notified by law enforcement authorities)
- remove content notified by so-called trusted flaggers without verifying legality themselves
- adopt proactive measures to detect and remove illegal content, rather than reacting to notification
- use fingerprinting tools to filter out content that has already been identified and assessed as illegal (by whom is not specified).
At no stage in any of this, let alone prior to takedown, is
there a requirement for legality to be evaluated by a competent authority. The
nearest the Communication comes is a suggestion that in cases of difficulty
platforms could obtain third-party advice.
This approach goes against long-standing principles, forged
in the offline world, in favour of due process and against prior restraint. It
replaces those principles with a presumption of illegality: that content is
guilty because accused.
Evaluation by a competent authority is an institutional
matter. The police and similar bodies may be familiar with subject matter, but
in the offline world we do not entrust them with the power to make binding
decisions about what is and is not illegal. For good reasons we confer that
power on the courts, or other independent institutions embodying due process.
One of those reasons is that human beings – let alone
algorithms – legitimately disagree over what is and is not illegal. Especially
with speech, potential illegality is hedged around with defences. In most areas
there is no definitive right or wrong answer to the question of illegality that
can be arrived at simply by examining content. Context – background information
– is often determinative.
Many offences are vaguely defined. This is especially true
of areas such as terrorist content and hate speech, where definitions are
notoriously vague. Sometimes they are deliberately drafted over-broadly,
relying on prosecutorial discretion to prevent overreach (a safeguard that is
lost in a scheme that does no more than consider the wording on the face of the
statute).
We empower courts to make decisions by which, for better
or worse, we can all abide. Why? Because courts and other independent tribunals
have the necessary social legitimacy to make binding, enforceable decisions on
questions of legality. That kind of legitimacy is something that commercial
social media platforms can never attain, however much in the way of
transparency, put-back and appeal procedures they are told to implement.
In response to the current barrage of criticism, the
platforms are falling over themselves to be seen to be doing more, and to do it
more quickly. And we now hear demands for content to be taken down within two
hours of notification.
Independent due process is a hallowed principle offline, one
we are in danger of abandoning online.
If the Commission’s Communication represents the future
direction of policy, we are moving closer to a regime in which:
- gateways are compelled to act as gatekeepers
- prior restraint – suppression either prior to publication or prior to an independent determination of illegality – is institutionalised
- content is assumed to be guilty because accused.
We are all familiar with the refrain that what is illegal
offline is illegal online. But rather than transposing offline to online, a
regime such as that advocated by the Commission abandons commitments to
independent due process and against prior restraint that have characterised our
historic attachment to freedom of speech in the offline world. We are at risk
of building an online prior restraint machine, powered by speech suppression
engines running on ‘due-process-free fuel’, the like of which has never been
seen offline.
It was precisely to prevent that happening that the Electronic
Commerce Directive was enacted nearly 20 years ago. Far from the Directive
being obviously out of date or superannuated, we, the users of the Internet, are
now more in need of its protections than ever before.
The issues around online illegal content deserve better than
simply repeating the mantra: ‘Here is something awful, something must be done,
Silicon Valley must do it’. In particular, we should look at practical ways of
introducing due process at source instead of shifting the responsibilities of
government onto platforms.
Of course, due process at source, like any other approach,
has to meet the challenge of scale. The perceived enormity of that challenge
does not mean we should abandon foundational offline principles that are
equally applicable online.
Graham Smith is a
partner in Bird & Bird LLP and specialises in IT, internet and intellectual
property law.
This article, which was first published on DigitalBusiness.Law (curated by Bird
& Bird), is based on a talk given to the Westminster eForum on 16 January
2018. A fuller discussion of the European Commission’s Communication on Tackling
Illegal Content Online can be found on Graham’s Cyberleagle blog.