The dilemmas lurking in deciding the extent to which public viewpoints should be moderated have lengthy histories. Whilst the media may have changed, the exercise of rights to free speech, and of related expressive and associative rights, has rubbed up against not just the law but pre-existing power structures, time and time again. The Charlie Hebdo massacre in 2015 and the death threats issued against South Park, Scandinavian cartoonists and Salman Rushdie provide perhaps the most chilling examples of this enduring part of our humanity.
The webinar “Everything in moderation, including moderation” covered a less horrific but equally visceral example: the liability regimes which govern the content we all consume, every day, on the world’s largest (U.S.) media platforms. Whilst Neil Brown (Director at decoded.legal and host for the session) suspected that the webinar might be the first time the SCL had covered the topic, it is of undeniable importance to each one of us: in determining the nature and extent of platforms’ liability for the content they publish, we determine the nature of the content itself via moderation, and thus what we see, hear, talk about, form views on or get outraged about.
Whilst the webinar understandably approached the topic mostly from a US/s.230 angle, given the dominance of US-based big tech in the marketplace, Neil mentioned the main UK and EU analogues: the EU E-Commerce Directive and the E-Commerce Regulations in the UK (which contain the mere conduit defence/principle), the Defamation Act, the Open Internet Regulations, terrorism legislation, and the evolution of court orders requiring takedowns of webpages (in copyright or trademark infringement judgments, for example).
Cathy Gellis (US Attorney, Outside Policy Counsel) first ran through the history of the U.S. regime surrounding s.230 of the Communications Decency Act of 1996, noting that the First Amendment and the concept of prior restraint had antecedents in English law. Nevertheless, defamation law could still render speech unlawful and, amongst other things, the First Amendment alone did not make the commercial environment safe enough for large platforms to operate in happily.
Enter s.230, which provides that no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Essentially, this insulates providers of interactive computer services (which includes everyone from social media platforms to email providers) from legal repercussions for the content their users generate whilst using their services or platforms. An important complementary provision is found in s.230(c)(2), which makes clear that platforms will not be held liable on account of moderation efforts “voluntarily taken in good faith to restrict access to or availability of material that the provider … considers to be obscene, lewd … excessively violent” and so on.
Overall, Cathy was clear that s.230 aims simultaneously to promote more good stuff online (by relieving platforms of liability, the rule should encourage a proliferation/marketplace of platforms) and less bad stuff online (by removing liability for certain forms of moderation). As such, it can be considered a compromise.
However, Dr Carolina Are (Visiting Lecturer, City University of London) recounted her experiences on the receiving end of Instagram moderation, which seemed to belie any notion of proportionality or reasonableness. The combination of shadowbanning and, ultimately, the banning of her account, simply because she was part of an active pole-dancing community posting sincere content about the art itself, betrayed gender inequalities and a form of perversely ignorant cultural monism, in which small or marginal cultures are not just overlooked but treated as hostile by algorithms trained on data drawn from the dominant/majority culture.
Shadowbanning itself – as a moderation measure – seems perverse in any case. There is little to no transparency about its implementation for the users on its receiving end, who are unlikely to be able to tell when, for example, the search bar no longer suggests their otherwise searchable content. Dr Are legitimately compared the phenomenon to gaslighting; combined with automated blocking, such as nudity recognition, it could be construed as a type of dishonest emotional manipulation operating on minority cultures.
Likewise, Instagram’s standard excuse for its mistakes (the platform reinstated Dr Are’s account after she appealed a ban), namely that it has too much content to moderate, does not seem genuine, especially given the more favourable treatment its moderation policies appear to give to celebrities’ content (see, for example, Kim Kardashian) compared with that of the less well-known.
Echoing Dr Are’s contention that moderation online merely reflects our offline inequalities, Neil asked the question – getting to the heart of matters – of why we moderate in the first place. The answers from both Cathy and Carolina spoke to the various compromises we make with one another in order to tolerate shared spaces holding divergent viewpoints. Whilst Cathy was more optimistic that, at least in the longer term, a combination of market forces and human freedoms should allow better successors to proliferate at the expense of the worst online arenas/squares (submitting that Facebook is a mere “blip in time”), Carolina noted that questions remain over the sheer size of these platforms: if they admit to being unable to moderate so much content, then why should they not be broken up into more manageably sized entities?
All the while, it will be the employees, lawyers and technologists who advise on the implementation of these legal regimes who will effect the most change at an individual level and at a content level. As Cathy remarked, lawyers internal to these organisations may simply demand takedowns of content in a risk-averse manner. It is the author’s contention that, more often than it is comfortable to acknowledge, the nuances and intricacies of legislative intent and draughtsmanship are lost in the milieu of competing business needs and time-pressured decisions. And this is without acknowledging potentially conflicting interests: for example, certain content may present business/investment risks for a platform without infringing the prevailing moderation policy.
Perhaps, therefore, the constructive conclusion is to do what Carolina does and immerse oneself in particular subcultures in order to understand them. In this sense, millennia-old problems demand millennia-old solutions. Education and understanding should surely help here, and perhaps only then will a moderate form of moderation emerge as a consensus.
Gerald Brent is an associate in the Commercial team at Addleshaw Goddard, advising on data protection and IP issues.