How can the law regulate removal of fake news?

February 11, 2019

Written evidence of disingenuous or ‘fake’ news is as old as the cuneiform tablets of Hammurabi1. The problem has become far more visible and arguably more acute online: social networks such as Facebook, YouTube and WhatsApp allow information, authentic or otherwise, to spread globally and instantly, while technological solutions to the problem risk threatening freedom of speech and media pluralism. Hildebrandt explains how the architecture of social media platforms creates disinformation problems at scale:

“Due to their distributed, networked, and data-driven architecture, platforms enable the construction of invasive, over-complete, statistically inferred, profiles of individuals (exposure), the spreading of fake content and fake accounts, the intervention of botfarms and malware as well as persistent AB testing, targeted advertising, and automated, targeted recycling of fake content (manipulation).” 2 

She warns that we must avoid the machine learning version of the Thomas self-fulfilling prophecy theorem – that “if a machine interprets a situation as real, its consequences becomes real”3. Hildebrandt explains that “data-driven systems parasite on the expertise of domain experts to engage in what is essentially an imitation game. There is nothing wrong with that, unless we wrongly assume that the system can do without the acuity of human judgment, mistaking the imitation for what is imitated”4. Some of the claims that Artificial Intelligence (AI) can ‘solve’ the problem of disinformation do just that. 

The digitisation of disinformation is blamed for skewing the results of elections and referenda and for amplifying hate speech5. De Cock Buning has argued that, at least in France and Italy in the period to 2018, “fake news is having a minimal direct impact. Its effect is limited mostly to groups of ‘believers’ seeking to reinforce their own opinions and prejudices”6. We agree that evidence of large-scale harm is still inconclusive in Europe, though abuses relating to the 2016 US Presidential election and the UK referendum on leaving the European Union (‘Brexit’) have recently been uncovered. The Commons Select Committee Interim Report on Disinformation and ‘Fake News’ states that “[i]n this rapidly changing digital world, our existing legal framework is no longer fit for purpose”7.

‘Disinformation’ refers to motivated faking of news, in line with the use of the term by the European institutions and the High Level Expert Group report8. The definition of ‘fake news’ as deliberate propaganda or ‘disinformation’ was made by the regional and global United Nations rapporteurs on freedom of expression in March 20179. It was reiterated in the European Commission High Level Expert Group report of March 12, 201810. The most recent iteration of the ‘fake news’ problem reached the West in the wake of claims of Russian interference in the 2016 U.S. presidential election and the UK ‘Brexit’ referendum11. The problem of state-sponsored social media inaccuracy was first identified in Ukraine in 2011, when Russia was accused of deliberately faking news of political corruption12. Many studies and reports have since attempted to quantify the threat of disinformation.

What can be done to minimise the problem? The EU-orchestrated Multistakeholder Forum produced an industry self-regulatory Code of Practice on Online Disinformation, following from the EU High Level Expert Group report. It examined technology-based solutions to disinformation, focusing on the actions of online intermediaries (social media platforms, search engines and online advertisers) to curb disinformation online13. Though the Code was criticised by its own Sounding Board for not stipulating any measurable outcomes,14 Kleis Nielsen argued that it produced “three potentially major accomplishments”15:

  1. Signatories commit to bot detection and identification, promising to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”. 
  2. Signatories submit their efforts to counter disinformation to external scrutiny by an independent third party: “an annual account of their work to counter Disinformation in the form of a publicly available report reviewable by a third party”. 
  3. Signatories make a joint, collaborative effort based on shared commitments from relevant stakeholders, including researchers, promising not to “prohibit or discourage good faith research into Disinformation and political advertising on their platforms”16.

Other EU initiatives also call for pro-active measures by intermediaries, through the use of AI, to aid removal of illegal content. The recently proposed EU Regulation on the Prevention of Dissemination of Terrorist Content Online17 targets rapid removal of terrorist content by online intermediaries. Iterations of Article 13 of the proposed Copyright in the Digital Single Market Directive18 suggest changing intermediary liability protections by requiring the use of filtering technologies. The European Commission has used the overarching phrase “a fair deal for consumers”19. These policy developments fit into a wider context in which social media platforms and search engines are increasingly scrutinised on competition grounds20 and are asked to take more responsibility for content removal.

We presented the findings of a co-authored study commissioned by the Scientific Foresight Unit of the European Parliament, which examined how governments can regulate the companies (Facebook, YouTube etc.) that themselves regulate disinformation spread on their own platforms21. We examined the effects that AI-enhanced measures against disinformation have on freedom of expression, media pluralism and the exercise of democracy, through the wider lens of tackling illegal content online and of demands that online intermediaries take proactive (automated) measures22, which risk turning them into censors of free expression. In line with the recommendations of the UN Special Rapporteur on Freedom of Opinion and Expression, we call for assessments of the impact of technology-based solutions on human rights in general, and on freedom of expression and media pluralism in particular23.

Restrictions on freedom of expression must be provided by law, pursue a legitimate aim24, and be proven necessary and the least restrictive means of pursuing that aim. The illegality of disinformation should be proven before filtering or blocking is deemed suitable. AI is not a silver bullet. Automated technologies are limited in their accuracy, especially for expression where cultural or contextual cues are necessary. The illegality of terrorist or child abuse content is far easier to determine than the boundaries of political speech or the originality of derivative (copyrighted) works. We should not push this difficult judgement exercise onto online intermediaries, which are inexpert in judging media pluralism and fundamental rights.

If the socio-technical balance is trending towards greater disinformation, a lack of policy intervention is not neutral, but erodes protection for the fundamental rights to information and expression. While there remains insufficient research to conclude authoritatively that this is the case, it is notable that after previous democratic crises involving media pluralism and new technologies (radio, television, cable and satellite), parliaments passed legislation to increase media pluralism by, for instance, funding new sources of trusted local information (notably public service broadcasters), authorising new licensees to provide broader perspectives, abolishing mandatory licensing of newspapers or even granting postage tax relief for registered publishers, and introducing media ownership laws to prevent existing monopolists extending their reach into new media25. While many previous media law techniques are inappropriate for online social media platforms, and some of these measures were abused by governments against the spirit of media pluralism, it is imperative that legislators consider which of them may provide a bulwark against disinformation without the need to introduce AI-generated censorship of European citizens.

Different aspects of the disinformation problem merit different types of regulation. We note that all proposed policy solutions stress the importance of literacy and cybersecurity. Holistic approaches point to challenges within the changing media ecosystem and stress the need to address media pluralism as well. Further, in light of the European elections in May 2019, attention has turned to strategic communication and political advertising practices.

Who should regulate fake news shared online? The job of regulating fake news should not fall solely on national governments or supranational bodies like the European Union. Neither should the companies be left to regulate themselves. Instead, we favour “co-regulation”: the companies regulate their own users, but must demonstrate the ability to regulate fake news, under the threat of state action if they do not engage in proper regulation themselves26.

Can Artificial Intelligence (AI) solve the fake news problem? One argument put forward by the owners of online platforms is that new technologies can solve the very problems they create. Chief among those technologies is machine learning or AI, alongside user reporting of abuse. However, the notion that AI is a ‘miracle cure’, the panacea for fake news, is optimistic at best. We argue that while AI can be useful for removing disinformation once it has been spotted, identifying it in the first place requires human analysis. This is especially true when national and cultural subtleties are involved.

Over time, AI solutions to detect and remove illegal/undesirable content have become more effective, but they also raise questions about who is the ‘judge’ in determining what is legal/illegal, and desirable/undesirable in society. Underlying AI use is a difficult choice between different elements of law and technology, public and private solutions, with trade-offs between judicial decision-making, scalability, and impact on users’ freedom of expression. 

Limiting the automated execution of decisions on AI-discovered problems is essential to ensuring human agency and natural justice: the right to appeal. That does not prevent the suspension of ‘bot’ accounts at scale, but it ensures the correct auditing of the system processes deployed. While AI remains inadequate for natural language processing and for audiovisual material, including so-called ‘deep fakes’ (fraudulent video representations of individuals), it has had more reported success in identifying ‘bot’ accounts: “So-called ‘bots’ foment political strife, skew online discourse, and manipulate the marketplace”27. Technical research into disinformation has followed several tracks (a minimal illustrative sketch of the first track follows the list):

  • identifying and removing billions of ‘bot’ accounts, as distinct from human accounts28
  • identifying the real world effects of Internet communication on social networks29
  • assessing the impact of disinformation via media consumption and electoral outcomes30
  • researching security threats from disinformation
  • researching discrimination and bias in the algorithms used to both propagate and increasingly to identify and/or disable disinformation31
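
As an illustration of the first track, the sketch below shows how simple behavioural features might be combined into a ‘bot’ score. It is a minimal, hypothetical example: the features, thresholds and weights are assumptions for illustration only, not any platform’s actual detection method, and real classifiers are trained on far richer data.

```python
# Minimal illustrative sketch of feature-based 'bot' account detection.
# Features, thresholds and weights are hypothetical assumptions, not any
# platform's actual method.
from dataclasses import dataclass


@dataclass
class Account:
    posts_per_day: float             # average posting frequency
    follower_following_ratio: float  # followers divided by accounts followed
    account_age_days: int
    duplicate_post_share: float      # fraction of posts that are near-duplicates


def bot_score(account: Account) -> float:
    """Combine simple heuristics into a 0-1 score; higher means more bot-like."""
    score = 0.0
    if account.posts_per_day > 100:              # inhuman posting volume
        score += 0.4
    if account.account_age_days < 30:            # newly created account
        score += 0.2
    if account.duplicate_post_share > 0.5:       # mostly recycled content
        score += 0.3
    if account.follower_following_ratio < 0.01:  # follows many, followed by few
        score += 0.1
    return min(score, 1.0)


if __name__ == "__main__":
    suspect = Account(posts_per_day=400, follower_following_ratio=0.001,
                      account_age_days=10, duplicate_post_share=0.8)
    # A high score flags the account for review or suspension at scale.
    print(f"bot score: {bot_score(suspect):.2f}")
```

Even a score like this should feed an auditable process, with the right of appeal discussed above, rather than trigger silent removal.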

The UK Parliament AI Committee reported on some of these issues in 201732. Automated takedown produces an enormous number of false positives. Human intervention is necessary to analyse these false positives, which otherwise lead to over-censorship of legitimate content incorrectly machine-labelled as disinformation.

Online disinformation is consumed not only via video news and newspapers, whose readerships have largely migrated online33, but also via images and amateur video montages (‘deep fakes’) that are far harder to detect as disinformation. Textual analysis of Twitter or news sites can only explore the tip of the iceberg of disinformation, as video and images are much more difficult to examine comprehensively. Only a partial view of AI effectiveness exists outside corporate walls:

“Facebook says its AI tools—many of which are trained with data from its human moderation team—detect nearly 100 percent of spam, and that 99.5 percent of terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic violence-related removals are detected by AI, not users.”34 

This level of AI removal sounds impressive, though these are unaudited company claims, and Facebook’s AI detects “just 38 percent of the hate speech-related posts it ultimately removes, and at the moment it doesn’t have enough training data for the AI to be very effective outside of English and Portuguese”35. In 2018, researchers claimed that trained algorithmic fact verification may never be as effective as human intervention, and attached serious caveats (each approach achieves an accuracy of only about 76%): “future work might want to explore how hybrid decision models consisting of both fact verification and data-driven machine learning judgments can be integrated”36. This is a sensible approach where resources allow for such a wide spectrum of solutions.
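
To make the ‘hybrid decision model’ idea concrete, the sketch below combines a machine-learning credibility score with a fact-check lookup and routes uncertain cases to human review. Everything in it is an assumption for illustration: the stand-in classifier, the toy fact-check table and the thresholds are hypothetical, not the method of any cited study or platform.

```python
# Minimal illustrative sketch of a hybrid moderation decision:
# a (hypothetical) ML credibility score plus a (hypothetical) fact-check lookup,
# with uncertain cases routed to human review rather than automated action.
from typing import Optional

FACT_CHECKS = {  # toy stand-in for an external fact-checking database
    "the moon is made of cheese": "false",
}


def ml_credibility_score(text: str) -> float:
    """Stand-in for a trained classifier returning an estimated P(credible)."""
    return 0.2 if "cheese" in text.lower() else 0.7


def fact_check_verdict(text: str) -> Optional[str]:
    """Stand-in for a lookup against an external fact-checking database."""
    return FACT_CHECKS.get(text.lower())


def moderation_decision(text: str) -> str:
    score = ml_credibility_score(text)
    verdict = fact_check_verdict(text)
    if verdict == "false" and score < 0.3:
        return "label/demote"   # independent signals agree: act automatically
    if verdict == "false" or score < 0.3:
        return "human review"   # signals disagree or only one fires
    return "no action"


if __name__ == "__main__":
    print(moderation_decision("The moon is made of cheese"))        # label/demote
    print(moderation_decision("Local election results announced"))  # no action
```

The design point is that automation acts alone only when independent signals agree; disagreement or low confidence defaults to the human judgement that the studies above suggest remains necessary.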

AI cannot be the only way to regulate content in future37. Subcontracting to people on very low wages in locations outside Europe is a great deal cheaper than employing a lawyer to work out whether an appeal should lead to content being put back online. The current incentive structure is for platforms to demonstrate how much content they have removed, when a more important measure may be the number of successful appeals to ‘put back’ legitimate content online38. Content moderation at scale still needs human intervention to interpret AI-flagged content.

Veale, Binns and Van Kleek explain how to move beyond transparency and explicability to replicability: being able to re-run the process and produce an answer that matches the answer the company obtained39. Greater transparency gives users more information, but the help that information provides is limited: users are told that if they do not agree to the effectively unilateral Terms of Service, they can no longer use the service. Transparency and explanation are necessary, but they are only a small first step towards better regulation40. A satisfactory solution to algorithmic transparency might be the ability to replicate the result achieved by the company producing the algorithm. Algorithms change all the time, and there are good reasons to keep them trade secrets. Replicability would be the ability to look at the algorithm in use at the time and, as an audit function, run it back through the data to produce the same result. Replication is used in medical trials as a basic principle of scientific inquiry. It would help to create more trust in what is otherwise a black box that users and regulators simply have to accept.
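
A minimal sketch of how such a replicability audit might work is below, assuming (as an illustration only) that the platform archives the algorithm version, the input record and a fingerprint of the recorded output for each decision; the record format and the toy ‘algorithm’ are hypothetical, not any company’s actual practice.

```python
# Minimal sketch of a replicability audit over archived decisions.
# Assumes a hypothetical decision log of (algorithm_version, input, output
# fingerprint) entries; not any company's actual record format.
import hashlib
import json


def algorithm_v1(record: dict) -> str:
    """Toy stand-in for the ranking/moderation algorithm in use at the time."""
    return "demote" if record.get("flagged_terms", 0) > 3 else "allow"


ALGORITHMS = {"v1": algorithm_v1}  # versioned registry retained for audit


def fingerprint(obj) -> str:
    """Stable hash of any JSON-serialisable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


def audit(decision_log: list) -> bool:
    """Re-run each archived decision and check the recorded output is reproduced."""
    for entry in decision_log:
        algo = ALGORITHMS[entry["algorithm_version"]]
        reproduced = algo(entry["input"])
        if fingerprint(reproduced) != entry["output_fingerprint"]:
            return False  # result cannot be replicated: flag for the regulator
    return True


if __name__ == "__main__":
    log = [{
        "algorithm_version": "v1",
        "input": {"flagged_terms": 5},
        "output_fingerprint": fingerprint("demote"),
    }]
    print("audit passed:", audit(log))  # True
```

An auditor who can reproduce each archived decision from the versioned algorithm and the archived inputs gains the kind of trust that transparency reports alone cannot provide.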

Fighting fake news does have a cost. Unless we engage European citizens to work independently on behalf of these companies – and this will be unpopular because it is expensive – we cannot solve this problem in Europe. What Europe needs to do, whether as a bloc or through its constituent national governments, is to make sure that companies engage European fact-checkers to work with their AI programmes, properly resourcing their own attempts to stop fake news. We also need European lawyers to work on appeals. Executives in California, ex-politicians such as Nick Clegg, or thousands of zero-hours contractors hired off the internet from the Philippines or India cannot regulate European fake news: it has to be Europeans. They must have some training in journalism and European human rights law to make judgements on journalistic opinion and freedom of expression.

While it would appear to be in the platform owners’ best interests to reduce the dissemination of disinformation, the means of doing so could prove a sticking point. As ever, it comes down to a question of money. The platforms are going to claim wonderful, marvellous results from Artificial Intelligence, because it is much cheaper to employ AI to solve the fake news problem than to employ enough humans to work alongside the machine learning. Platforms claim they can use AI to solve this problem without employing many trained lawyers and journalists. The reality is that the only accurate way to deal with fake news is a hybrid model of trained humans working on problems that AI has identified. Humans have to make the value judgements. That is expensive for Facebook and YouTube, but absolutely essential to accuracy. They will only invest in qualified European fact-checkers and fake news spotters if forced to do so by governments.

Does the evidence support any further legal intervention to control disinformation? First, note that the evidence base is growing rapidly in early 2019, and there is strong recent evidence that electoral outcomes have been affected by online disinformation. Second, the Information Commissioner is auditing the activities of the Brexit campaigners in the 2016 UK referendum, having issued £120,000 in fines on 1 February for three separate illegal uses of personal data41. Third, there remain significant questions about Online Behavioural Advertising (OBA): in electoral periods, as a campaign tool more widely, and as an effective and appropriate use of personal information under the GDPR. We have only just begun the process of regulating online disinformation and its uses in our democracies.

Professor Chris Marsden, University of Sussex

Dr Trisha Meyer, Vrije Universiteit Brussel

———————-

1. Discussed in Enriques, L. (9 Oct 2017) ‘Financial Supervisors and RegTech: Four Roles and Four Challenges’, Oxford University, Business Law Blog, http://disq.us/t/2ucbsud 

2. Hildebrandt, M. (2018) ‘Primitives of Legal Protection in the Era of Data-Driven Platforms’, Georgetown Law Technology Review 2(2) at p. 253 footnote 3

3. Merton, R.K. (1948) ‘The Self-Fulfilling Prophecy’, The Antioch Review 8(2), 193-210.

4. The imitation game is often known as the Turing test, after Turing, A.M. (1950) ‘Computing Machinery and Intelligence’, Mind 49, 433-460.

5. Euronews (9 Jan 2019) How Can Europe Tackle Fake News in the Digital Age?, https://www.euronews.com/2019/01/09/how-can-europe-tackle-fake-news-in-the-digital-age 

6. de Cock Buning, M. (10 Sept 2018) ‘We Must Empower Citizens In The Battle Of Disinformation’, International Institute for Communications, http://www.iicom.org/themes/governance/item/we-must-empower-citizens-in-the-battle-of-disinformation 

7. UK House of Commons Digital, Culture, Media and Sport Committee (2018) Interim Report on Disinformation and ‘Fake News’, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/36302.htm

8. Bentzen, Naja (2015) Understanding propaganda and disinformation, European Parliament Research Service, http://www.europarl.europa.eu/RegData/etudes/ATAG/2015/571332/EPRS_ATA(2015)571332_EN.pdf

9. U.N. Special Rapporteur on Freedom of Opinion and Expression et. al., Joint Declaration on Freedom of Expression and “Fake News,” Disinformation and Propaganda, U.N. Doc. FOM.GAL/3/17 (Mar. 3, 2017), https://www.osce.org/fom/302796?download=true 

10. High Level Expert Group on Fake News and Online Disinformation (2018) Report to the European Commission on A Multi-Dimensional Approach to Disinformation, https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

11. See Robert Epstein & Ronald E. Robertson, ‘The Search Engine Manipulation Effect (SEME) and its Possible Impact on the Outcomes of Elections’, 112 PROC. NAT’L ACAD. SCI. E4512 (2015) (describing search engine manipulation for election interference). 

12. See Sergey Sanovich, Computational Propaganda in Russia: The Origins of Digital Misinformation (Oxford Computational Propaganda Research Project, Working Paper No. 2017.3, 2017), http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/06/Comprop-Russia.pdf 

13. EU Code of Practice on Disinformation (2018) https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation

14. European Commission (26 September 2018) Code of Practice on Disinformation, Press Release, https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation 

15. Nielsen, R.K. (24 Oct 2018) ‘Misinformation: Public Perceptions and Practical Responses’, Misinfocon London, hosted by the Mozilla Foundation and Hacks/Hackers, https://www.slideshare.net/RasmusKleisNielsen/misinformation-public-perceptions-and-practical-responses/1 

16. Nielsen, R.K. (26 Sept 2018) Disinformation Twitter Thread, https://twitter.com/rasmus_kleis/status/1045027450567217153

17. Proposed EU Regulation on Prevention of Dissemination of Terrorist Content Online (COM(2018) 640 final – 2018/0331 (COD)) https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-preventing-terrorist-content-online-regulation-640_en.pdf 

18. Proposed EU Directive on Copyright in the Digital Single Market (COM(2016) 593 final – 2016/0280(COD)) https://ec.europa.eu/digital-single-market/en/news/proposal-directive-european-parliament-and-council-copyright-digital-single-market

19. Vestager, M. (2018) ‘Competition and A Fair Deal for Consumers Online’, Netherlands Authority for Consumers and Markets Fifth Anniversary Conference, 26 April 2018, The Hague, https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/competition-and-fair-deal-consumers-online_en 

20. For a scholarly overview and discussion of ongoing platform and search engine competition cases, see Mandrescu, D. (2017) ‘Applying EU Competition Law to Online Platforms: The Road Ahead – Part I’, Competition Law Review 38(8) 353-365; Mandrescu, D. (2017) ‘Applying EU Competition Law to Online Platforms: The Road Ahead – Part II’, Competition Law Review 38(9) 410-422. For an earlier call to co-regulation, see Marsden, C. (2012) ‘Internet Co-Regulation and Constitutionalism: Towards European Judicial Review’, International Review of Law, Computers & Technology 26(2-3) 215-216

21. https://epthinktank.eu/author/stoablogger/

22. ACR techniques became newsworthy in 2016 with the development of eGLYPH for removal of terrorist content: see The Verge (2016) Automated Systems Fight ISIS Propaganda, But At What Cost?, https://www.theverge.com/2016/9/6/12811680/isis-propaganda-algorithm-facebook-twitter-google 

23. UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (2018) Report to the United Nations Human Rights Council on A Human Rights Approach to Platform Content Regulation, A/HRC/38/35, https://freedex.org/wp-content/blogs.dir/2015/files/2018/05/G1809672.pdf

24. I.e. pursue one of the purposes set out in Article 19(3) of the International Covenant on Civil and Political Rights: to protect the rights or reputations of others, or to protect national security, public order, or public health or morals.

25. See e.g. C-288/89 (judgment of 25 July 1991, Stichting Collectieve Antennevoorziening Gouda and others [1991] ECR I-4007); Protocol on the System of Public Broadcasting in the Member States annexed to the EC Treaty; Council Directive 89/552/EEC on the Coordination of Certain Provisions Laid Down by Law, Regulation or Administrative Action in Member States concerning the Pursuit of Television Broadcasting Activities (particularly its seventeenth recital)

26. Marsden, C. (2012) ‘Internet Co-Regulation and Constitutionalism: Towards European Judicial Review’, International Review of Law, Computers and Technology 26(2) 212-228

27. Lamo, M. and Calo, R. (2018) ‘Regulating Bot Speech’, UCLA Law Review 2019, http://dx.doi.org/10.2139/ssrn.3214572 

28. Gilani, Z., Farahbakhsh, R., Tyson, G., Wang, L., and Crowcroft, J. (2017) ‘Of Bots and Humans (on Twitter)’, in ASONAM ’17 Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 349-354; Perez, B., Musolesi, M., and Stringhini, G. (2018) ‘You are Your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information’, ICWSM.

29. Including the ‘Dunbar number’ of friends that can be maintained, which has not measurably increased with the Internet: Dunbar, R. I. M. (2016) ‘Do Online Social Media Cut Through the Constraints that Limit the Size of Offline Social Networks?’, Royal Society Open Science 2016(3), DOI: 10.1098/rsos.150292. Quercia, D., Lambiotte, R., Stillwell, D. Kosinski, M., and Crowcroft, J. (2012) ‘The Personality of Popular Facebook Users’, in Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12), pp. 955-964, https://doi.org/10.1145/2145204.2145346

30. Zannettou, S. et al. (2018) Disinformation Warfare: Understanding State-Sponsored Trolls on Twitter and Their Influence on the Web, arXiv:1801.09288v1

31. Alexander J., and Smith, J. (2011) ‘Disinformation: A Taxonomy’, IEEE Security & Privacy 9(1), 58-63, doi: 10.1109/MSP.2010.141; Michael, K. (2017) ‘Bots Trending Now: Disinformation and Calculated Manipulation of the Masses [Editorial]’, IEEE Technology and Society Magazine 36(2), 6-11, doi: 10.1109/MTS.2017.2697067

32. UK House of Lords (2017) AI Select Committee: AI Report Published, https://www.parliament.uk/business/committees/committees-a-z/lords-select/ai-committee/news-parliament-2017/ai-report-published/ (note that the report itself is published at a non-standard URL accessed via this link)

33. Nielsen, R.K. and Ganter, S. (2017) ‘Dealing with Digital Intermediaries: A Case Study of the Relations Between Publishers and Platforms’, New Media & Society 20(4), 1600-1617, doi: 10.1177/1461444817701318

34. Koebler, J., and Cox, J. (23 Aug 2018) ‘The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People’, Motherboard, https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works

35. Koebler and Cox (2018) supra n.34

36. Perez-Rosas, V., Kleinberg, B. Lefevre, A. and Mihalcea, R. (2018) Automatic Detection of Fake News, http://web.eecs.umich.edu/~mihalcea/papers/perezrosas.coling18.pdf 

37. Schaake, M. (2018) ‘Algorithms Have Become So Powerful We Need a Robust, Europe-Wide Response’, The Guardian https://www.theguardian.com/commentisfree/2018/apr/04/algorithms-powerful-europe-response-social-media 

38. Google (2018) YouTube Transparency Report, https://transparencyreport.google.com/youtube-policy/overview 

39. Veale, M., Binns, R., and Van Kleek, M. (2018) ‘The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018)’, Workshop at ACM CHI’18, 22 April 2018, Montreal, arXiv:1803.06174

40. Edwards, L. and Veale, M. (2017) Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for, https://ssrn.com/abstract=2972855. Erdos, D. (2016) ‘European Data Protection Regulation and Online New Media: Mind the Enforcement Gap’, Journal of Law and Society 43(4) 534-564, http://dx.doi.org/10.1111/jols.12002

41. ICO (1 Feb 2019) ICO to Audit Data Protection Practices at Leave.EU and Eldon Insurance after Fining Both Companies for Unlawful Marketing Messages, https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/02/ico-to-audit-data-protection-practices-at-leaveeu-and-eldon-insurance-after-fining-both-companies-for-unlawful-marketing-messages