Interoperability: A solution to regulating AI and social media platforms

September 18, 2019

Introduction

In the space of a few short years, Artificial Intelligence (AI) has leapt from the pages of science fiction to become a part of our everyday lives. Digital-era governance means that decisions about our lives are taken by both government and private actors using AI systems, with potentially profound implications1. AI technologies “aim to reproduce or surpass abilities (in computational systems) that would require intelligence if humans were to perform them”2. AI is the latest, most advanced deployment of Machine Learning (ML), yet it remains a very early stage of any defined Artificial General Intelligence that might achieve the hypothetical ‘singularity’ of self-consciousness made infamous over fifty years ago by Stanley Kubrick’s cinematic interpretation of Arthur C. Clarke’s HAL90003. ML is in some respects a subset of human-computer interaction (HCI): that is, algorithms applied to (big) data to aid human decisions.

AI is already deployed in ways that we may not even be aware of, with incidents of abuse of the underlying data reported daily. In this article, we argue there is a better, broader way to prevent abuse: interoperability. “Computer says no” cannot be the final answer to our quest for justice in such decisions. What is needed most urgently is a remedy against dominant consumer-facing platforms deploying AI in non-transparent systems. AI is used in many systems with little to no transparency, from facial recognition cameras in public spaces to removal of ‘fake news’ from social media platforms, yet consumers have no visibility of these technologies nor remedy if their rights are infringed. In our view the answer is not just a temporary dose of transparency, which may not be feasible or even desirable4, but an interoperability remedy that lets regulators and potential rivals see inside the ‘black box’ to judge the AI for themselves5. There is a caveat: regulation may not be suitable, appropriate or feasible for many algorithms, but for those that cause regulators most concern, in sectors where the most sensitive socioeconomic decisions are made, it is a remedy worth exploring. Sensitive public-facing sectors may include banking/credit, insurance, healthcare and medical research, social care, policing and security, education, transport (AI-guided airliners and automated vehicles), social media, and telecommunications6. This is a non-exclusive list that may be altered by emerging public techno-socio-policy concerns.

How is AI governed in practice?

At present, AI is largely governed through self-regulation, and the technology giants, including the GAFAM/FAANG platform operators7, appear set on persuading us that self-regulation remains the only effective route to legal accountability for machine learning systems. Such an attitude jeopardises the sustainable introduction of smart contracts, permits algorithmic discrimination and compromises the implementation of privacy law8.

Recent public policy focus on digital decision-making has led to a wider debate about computer-aided adjudication. Legal scrutiny has exposed the discrimination that can occur when machine learning is introduced into such decision-making9. Discriminatory data is likely to lead to discriminatory results, and discriminatory algorithms – as well as those not designed to filter out discrimination – can make those results more discriminatory still. Justice requires that lawyers study algorithmic outcomes in order to ascertain such discrimination, which may be highly inefficient as well as an affront to natural justice and fundamental rights. Public administration has generic solutions: administrative law requires natural justice, or at least ‘reasonableness’; a right to explanation and/or remedy should apply; and anti-discrimination law also applies to corporate decisions. AI decision making raises the question: is the decision maker the AI or the human?

The case of UK visa applications demonstrates that AI is not a trustworthy contributor to what was never a happy or exact science. The UK government minister (at the time of writing) claimed that the use of AI in visa applications was acceptable because humans made the final decision: “Sifting is not decision making”10. The Council of Europe in principle disagrees: while to err is human, introducing AI complexity does not absolve the operator of responsibility for harms11.

Our focus in this article is on the private activities of private companies, particularly in networked industries that affect consumers at scale. We now have a variety of pro-consumer/citizen laws that extend rights and obligations far beyond classical freedom of contract, including anti-discrimination and equality laws, financial regulation, consumer contract law, and telecommunications regulation. Specialist technology law is deployed in many fields that now make up the Information Society: biomedical/nanotechnology deployment; railways, roads and telecoms; data protection12. Judges may solve such problems in tort and contract, though this took a hundred years in the case of railway litigation, and it would require many technologically savvy judges and a large number of leading cases in common law jurisdictions to achieve the same outcome. In contrast, the largest civil law system, the European Union, is pressing ahead with consumer legislation to combat AI injustice before the end of 2019, President-elect Von der Leyen stating: “In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.”13 The new President also promised a new Digital Services Act to regulate large digital platforms. Our proposed solution approaches both issue areas coherently.

Transparency, replicability and general data protection are incomplete solutions to AI 

Transparency is the first requirement of legal recourse (though some algorithms can be reverse engineered without transparency “under the hood” of the machine). It is not sufficient, however, for several reasons. Claims that the ability to study an algorithm and its operation provides a remedy for users who suffer as a result of its decisions fall short for one simple reason: both the training data and the algorithm itself change constantly. For instance, it is impossible to forecast real-time outcomes of Google searches; a vast Search Engine Optimization business attempts approximations without complete accuracy. The only remedy that can be achieved is replicability – taking an ‘old’ algorithm and its data at a previous point in time to demonstrate whether the algorithm and data became discriminatory. This is an incomplete remedy, as in effect it offers a ‘slow motion replay’ while the game rushes onwards.
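
To make ‘replicability’ concrete, consider the minimal sketch below. It assumes a scikit-learn-style model with a predict() method; the helper names are our own illustration, not an existing audit tool. The operator archives the algorithm and its data fingerprint at a point in time, so an auditor can later re-run past decisions even though the live system has moved on.

```python
import hashlib
import pickle
from datetime import datetime, timezone

def snapshot(model, training_data: bytes, decisions: list) -> dict:
    """Freeze a model, a fingerprint of its training data, and its recorded
    decisions, so the 'slow motion replay' described above is possible."""
    model_blob = pickle.dumps(model)
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "model_sha256": hashlib.sha256(model_blob).hexdigest(),
        "data_sha256": hashlib.sha256(training_data).hexdigest(),
        "model_blob": model_blob,
        "decisions": decisions,  # the inputs and outcomes as recorded
    }

def replay(snap: dict, inputs: list) -> list:
    """Re-run the archived model on archived inputs for a discrimination audit."""
    model = pickle.loads(snap["model_blob"])
    return [model.predict([x])[0] for x in inputs]  # assumes sklearn-style predict()
```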

Wagner argues for the need for systematic redress by an external agency to instil confidence in AI decision making14. He uses AI deployment case studies to illustrate the point: self-driving cars, police searches using social media/Passenger Name Records, and Facebook content moderation. All require at least minimal regulation if the public is to place some trust in these technologies (some of which are compulsory in order to use services or even to enter countries). ‘Ethics washing’ is undertaken by technology companies and their professional advisors, who attempt to persuade policy makers that self-regulation is the only effective route to legal accountability for Machine Learning systems15. If this means the public distrusts AI and any system claiming to use AI, it may jeopardise the sustainable introduction of smart contracts, permit algorithmic discrimination and compromise the implementation of data protection law. Regulators are wise to these tricks. Ethics washing will fail16. Cursory research into the history of communications regulation and Internet law demonstrates the falsity of the self-regulation proposition17.

The EU right to data portability (“RTDP”) under the GDPR18 might be seen as a partial solution to combat market concentration in the EU. The current version of the RTDP may be too limited, as portability only applies where the data subject herself provided the data, yet data is often a shared resource with multiple owners and creators (consider a selfie of best friends, posted by both online in separate accounts with separate tags and hashtags). Further, it cannot be a general instrument of economic policy in digital markets, as data is “unlocked” solely if the data subject invokes the RTDP under the GDPR19. Edwards and Veale indicate the RTDP is not enough and that “regulation to promote true interoperability is vital”20.

Competition or communications/media regulation: What can and should be done? 

Interoperability enables freer data flows, an essential but not sufficient input for data-driven innovation21. Open and interoperable standards can help to increase competition in digital markets: the UK’s Open Banking Standards, designed to enhance competition in the banking sector by enabling fintech entrepreneurs to enter the market, are an apt example22. However, interoperability will not always lead to more innovation and competition23. Interoperability through uniform standards and interfaces might limit companies’ development of their own innovative goods and services with specific components, since they have to comply with the requirements of interoperability24. Implementation of a maximum level of interoperability could also cause privacy harms: if technical and consumer control mechanisms are not well designed, interoperability might increase the risk of misuse of personal data, as multiple service providers gain access to users’ personal data. Open and interoperable standards should therefore avoid over-standardisation and serve pro-competitive goals25.

We therefore suggest three regulatory options for consumer-deployed AI regulation, though we propose that only two be made operational.

1. Ethical standards for all AI deployed in the ‘wild’ to the public. ISO standards should be implemented with basic privacy/human rights impact assessment.

2. Interoperability for public communications providers – Instant Messaging/Search/Social Media companies

3. APIs (Application Programming Interfaces) opened by dominant (Significant Market Power: SMP) operators. This is based on the Microsoft remedies in the longest, most expensive antitrust case in European Commission history: a case which started in 1993 and whose remedies, imposed in 2004, only expired at the end of 2014. The later Google antitrust case, started in 2009, is still ongoing a decade later26.

Ethical standards for all AI deployed in the ‘wild’ to the public

An industry standard could be a baseline for deploying sensitive technologies with cybersecurity and human rights impacts. ISO standards are being formed and can be quite powerful influencers (see ISO 27001 on information security management, for example). Technical engineering is typically a realm not considered suitable for normative standards.

However, standards embedded in national laws can become a co-regulatory signal, albeit a weak one. A basic privacy/human rights impact assessment has been proposed by UN Special Rapporteur Prof. David Kaye, and an AI impact assessment has been suggested by Mantelero for the Council of Europe27. Standards Australia is chairing an ISO Working Party28.

More broadly, ethics standards for AI deployment have been suggested by many organisations. The European Union29 and OECD30 guidelines may receive the widest acceptance. Many other initiatives exist, such as the US 2019 Executive Order on AI and the UK Centre for Data Ethics and Innovation (CDEI) at the Turing Institute31. Hosanagar advocates the creation of an independent Algorithmic Safety Board, modelled on the Federal Reserve Board32.

Why interoperate?

Connectivity and communication are an essential part of contemporary life, whether it be individuals using social media or telecommunications, businesses interacting with one another, or exchanges across government departments. Interoperability at its most basic level can be defined as the ‘ability of two or more systems or components to exchange information and to use the information that has been exchanged.’33

Interoperation is driven by economics: there is nothing less valuable than a network with one user. Interoperability increases the value of the connected networks and promotes efficient investment in, and use of, infrastructure. It permits new entrants to compete with existing operators, promoting market entry. The network effects of interoperability rest on a heuristic called Metcalfe’s law: Metcalfe hypothesised that while the cost of growing the number of connections in a network is linear, the network’s value is proportional to the square of the number of users34. The users and operators of each network gain as more users join that network, and lose where users switch away to a more popular network.
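
A back-of-the-envelope sketch of that arithmetic, with purely illustrative cost and value constants, shows why interconnection pays: two isolated networks of n users are worth 2n² in total, while one interoperable network of 2n users is worth (2n)² = 4n².

```python
def network_cost(users: int, cost_per_connection: float = 1.0) -> float:
    """Metcalfe: cost grows linearly with the number of connections."""
    return users * cost_per_connection

def network_value(users: int, value_per_link: float = 1.0) -> float:
    """Metcalfe: value grows with the number of possible links, roughly users squared."""
    return value_per_link * users ** 2

n = 1_000
isolated = 2 * network_value(n)       # two walled gardens: 2 * n^2
interoperable = network_value(2 * n)  # one interconnected network: (2n)^2
print(interoperable / isolated)       # 2.0 -- interoperation doubles total value
```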

There are social benefits to interoperability. It removes the need for consumers to acquire access to every network, and counteracts the tendency towards a winner-takes-all outcome. Incompatible networks are inelegant from a device design perspective too: readers may remember when the US had different mobile standards from the EU (CDMA rather than GSM). In Instant Messaging (IM), the winner-takes-all outcome absent interoperability is arguably Facebook/WhatsApp/Instagram – with all IM services inside the corporation becoming interoperable with each other35.

Interoperability can be divided into technical and non-technical forms. Technical interoperability includes communications, electronic, application, and multi-database interoperability, whilst non-technical interoperability includes organisational, operational, process, cultural and coalition interoperability.

Regulatory intervention can be applied to either, but addressing the technological aspects of interoperability provides the more predictable regulation.

Interoperability option for public communications providers (PCPs)

Interoperability is not radical as a regulatory requirement. It is required of broadcasters to enable Electronic Programme Guides (EPGs), and of telecoms companies for telephone numbering schemes. Co-regulatory standards are often used. A PCP interoperability proposal would not regulate public communications providers as utilities but as media providers; this is neither common carrier regulation nor equivalent to that of energy/postal providers. It is intended to regulate operators as printers, not publishers, with primary content liability remaining with individual users/authors. We note that attempts to impose a ‘Duty of Care’ in the UK and fiduciary duties in the US are highly inappropriate and anomalous to the entire history of Internet and analogue free speech and content regulation36.

Not all PCPs will wish to interoperate, not least because the large platform PCPs have been found to have insecure communications and compromised protocols, so smaller PCPs may refuse to interoperate even were the option available. A good example is the data security and minimisation philosophy of Signal’s founder, cryptographer Moxie Marlinspike of Open Whisper Systems, a perspective shared in part by Telegram37. The PCP interoperability option can therefore only be adopted towards specific dominant operators, not all PCPs, without compromising cybersecurity innovation and the freedom of choice of individual users.

Opening dominant operators’ APIs

Opening up the API enables brokers, comparator programmes and regulators to access algorithms in real time and under controlled conditions, in order to observe the algorithm’s behaviour. Where an operator is found to be dominant, interoperability could be applied as a consumer remedy, not a competition one. EU Commissioner Vestager recently described her policy on interoperability and large platforms:

“Making sure that products made by one company will work properly with those made by others – can be vital to keep markets open for competition. Microsoft’s takeover of LinkedIn approval depended on agreement to keep Office working properly, not just with LinkedIn, but also with other professional social networks. The Commission will need to keep a close eye on strategies that undermine interoperability”38.
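
As a concrete illustration of how a regulator or comparator service might exercise such access ‘in real time and under controlled conditions’, consider the following sketch of an algorithmic audit over an opened API. This is a minimal sketch under stated assumptions: the endpoint URL, request fields and authorisation scheme are hypothetical, not any platform’s real interface.

```python
import requests  # standard Python HTTP client

# Hypothetical audit endpoint exposed under an API-opening remedy.
AUDIT_ENDPOINT = "https://platform.example/api/v1/decision"

def probe(profile: dict, token: str) -> dict:
    """Submit one controlled test profile and record the algorithm's output."""
    resp = requests.post(
        AUDIT_ENDPOINT,
        json=profile,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def paired_test(base: dict, attribute: str, values: list, token: str) -> dict:
    """Vary a single protected attribute while holding all other inputs
    constant, then compare outcomes -- a classic audit design for
    detecting discriminatory decisions."""
    return {value: probe({**base, attribute: value}, token) for value in values}
```

Such probes do not require the platform to publish its source code or model weights: observing outputs under controlled inputs is precisely the ‘inside the black box’ visibility the interoperability remedy seeks.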

Recently, in a contested decision, the Australian ACCC found dominance by Facebook and Google39. Interoperability would apply only to the platform aspects of their business – for example mobile app stores, not Apple or Android phones themselves. Three models have been proposed:

Model 1: Must-carry obligations, as used for regulating EPGs

Model 2: API disclosure requirements, as imposed on Microsoft by EC rulings40.

Model 3: Interconnect requirements, as applied in telecoms, especially to operators with SMP41.

Interoperability can be separated into three types, as identified in a recent study for DG Competition42:

  • Protocol interoperability: the ability of services/products to interconnect technically. It is the ‘usual’ form of interoperability seen in competition policy, as between the Microsoft Windows operating system and the APIs of Internet browsers such as Firefox and Chrome.
  • Data interoperability: recalling Mayer-Schönberger/Cukier and their remedy for ‘Big Data’ monopolists in their eponymous book, this would provide a slice of data to competitors43.
  • Full protocol interoperability: what telecoms regulators often think of as full interconnection.

In principle, providing access to APIs is likely to be in the best interest of the service provider: the provider gains the same network effect advantage set out above. However, if a service provider with SMP chooses to keep an API private, or chooses not to make an API available at all, either choice may represent a barrier to entry. If either of these conducts has the potential to substantially lessen competition, then an ex ante access regime for the API is a potential regulatory solution.

The requirements for such an access regime would be consistent with the usual practice associated with essential facilities or bottlenecks in networked industries. However, there will need to be slight differences in the regime, depending on whether access is to an otherwise private API or to an API that was required to be created as part of the ex ante regulation. The regulatory language required to impose the API obligation is similar to that used in telecommunications: the API provider is referred to as the access provider, and the person seeking to use the API as an access seeker. A preliminary stage of the ex ante regulation might well be a regime in which an access provider can make a standing API access offer, by having either a public or private API to which access is offered on a non-discriminatory basis, with the terms and conditions of access set out in a Standard API Access Agreement (SAAA). The SAAA would form an offer, capable of acceptance by any member of a class of those qualified to become access seekers.
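
A sketch of how such a standing SAAA offer might be modelled in code follows; every field name here is our own illustrative assumption, not proposed statutory wording.

```python
from dataclasses import dataclass

@dataclass
class StandardAPIAccessAgreement:
    """Illustrative model of an SAAA: a standing, non-discriminatory offer
    of API access, open to any member of the qualified class of seekers."""
    access_provider: str
    api_id: str
    public_api: bool                 # a public API, or private but offered
    qualified_class: str             # who may become an access seeker
    price_terms: str = "building block model"       # see pricing discussion below
    quality_terms: str = "equivalent to self-supply"

    def accept(self, access_seeker: str) -> dict:
        """Acceptance by a qualified access seeker concludes the agreement
        on the standing (non-discriminatory) terms."""
        return {
            "provider": self.access_provider,
            "seeker": access_seeker,
            "api": self.api_id,
            "terms": (self.price_terms, self.quality_terms),
        }
```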

If there is no such SAAA, then the regulatory access obligation would be in the form set out below.

If the access provider has an API, then the access provider must, if requested to do so by an access seeker:

(a) supply access to the API to the access seeker;

(b) take all reasonable steps to ensure that the technical and operational quality of the API supplied to the access seeker is equivalent to that which the access provider provides to itself; and

(c) take all reasonable steps to ensure that the access seeker receives, in relation to the API, fault detection, handling and rectification of a technical and operational quality and timing that is equivalent to that which the access provider provides to itself.

If the access provider has created an API, then the access provider must, if requested to do so by an access seeker:

(a) supply access to the API to the access seeker; and

(b) take all reasonable steps to ensure that the access seeker receives, in relation to the API, equivalent technical, operational and data access outcomes to those that the access provider provides to itself.
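
The ‘equivalent to that which the access provider provides to itself’ obligations above are measurable in operation. A minimal monitoring sketch follows; the metric names and the 5% tolerance are illustrative assumptions, not proposed regulatory values.

```python
def equivalence_check(self_supply: dict, seeker: dict,
                      tolerance: float = 0.05) -> dict:
    """Compare operational metrics for the provider's own use of the API
    with those experienced by an access seeker; flag any metric where the
    seeker is more than `tolerance` worse off than the provider itself."""
    report = {}
    for metric in ("latency_ms", "error_rate", "fault_fix_hours"):
        gap = (seeker[metric] - self_supply[metric]) / self_supply[metric]
        report[metric] = {"relative_gap": gap, "equivalent": gap <= tolerance}
    return report

# Illustrative figures (all within tolerance, so access is 'equivalent'):
print(equivalence_check(
    {"latency_ms": 40.0, "error_rate": 0.0100, "fault_fix_hours": 4.0},
    {"latency_ms": 41.0, "error_rate": 0.0102, "fault_fix_hours": 4.1},
))
```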

The price of access to an API would usually be based on a building block model approach. In any case, it should return a normal profit to the access provider based on that access provider’s weighted average cost of capital. There may be a requirement to provide a safety-net set of non-price access terms and conditions in the absence of an SAAA.
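
As a worked illustration of the building block approach: the formula (a return on the asset base at the weighted average cost of capital, plus depreciation and operating expenditure, converted to a per-unit price) follows standard regulatory practice, but every number below is hypothetical.

```python
def building_block_price(asset_base: float, wacc: float,
                         depreciation: float, opex: float,
                         annual_requests: float) -> float:
    """Annual revenue requirement = return on capital + depreciation + opex,
    converted into a per-request API access price."""
    revenue_requirement = asset_base * wacc + depreciation + opex
    return revenue_requirement / annual_requests

# Hypothetical: $10m asset base, 8% WACC, $1m depreciation, $2m opex,
# 100m API requests a year -> revenue requirement $3.8m, i.e. $0.038/request.
print(building_block_price(10e6, 0.08, 1e6, 2e6, 100e6))  # 0.038
```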

Conclusion: From interoperability for social media platforms deploying AI to a broader remedy?

We have explained in this article that AI is too dynamic an environment for transparency and replicability to provide a comprehensive solution for users who have suffered injustices. To help the regulatory environment work in the public interest, we need to introduce interoperability for users and regulators to see ‘inside the black box’ of AI decision makers. Interoperability is not radical as a regulatory requirement: it is required of broadcasters and telecoms companies to enable EPGs and telephone numbering schemes respectively, and co-regulatory standards are often used. This proposal would not regulate public communications providers as utilities but as media providers; it is neither common carrier regulation nor equivalent to that of energy/postal providers. It is intended to regulate operators not as publishers but as printers, with primary content liability remaining with individual users/authors. We are agnostic as to the location of an ‘interoperability regulator’, beyond noting that the deployment of AI is predicted to become so widespread throughout socio-economic arenas that a generic regulator may rapidly prove more useful than a communications-specific regulator. More research is needed as to whether ‘Ofcom’ should be supplanted or supplemented by ‘OffData’44.

Many research questions for digital competition remain. Interoperability is extensively used in the sectors with which we are most familiar, but is the remedy more broadly applicable? Can self-driving vehicles, or banking, insurance and medical algorithmic ‘AI’, be regulated using interoperability? That depends on a variety of socio-economic factors. Many sectoral regulators are already working on ‘regulatory sandbox’ solutions.

The UK Furman Review of digital markets for the Treasury discussed the ‘data mobility’ remedy45. Can the Consumer Data Right be deployed to deliver open banking, open energy and open telecoms? Many digital competition regulators have become very excited about applying the model to their own networked industries, but the precise interoperability solutions remain to be tested and evaluated. This paper is a short provocation as to the path forwards.

Professor Chris Marsden, University of Sussex

Dr Rob Nicholls is a senior lecturer in business law at the UNSW Business School and the director of the UNSW Business School Cybersecurity and Data Governance Research Network

———————————–

Notes and sources

1 Margetts, Helen and Dunleavy, Patrick (2013) The second wave of digital-era governance: a quasi-paradigm for government on the Web, Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 371, at http://rsta.royalsocietypublishing.org/content/371/1987/20120382. See also Dunleavy and Margetts (2010) The Second Wave of Digital Era Governance, APSA 2010 Annual Meeting Paper, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1643850, and Laird, L. (2016) As Governments Open Access to Data, Law Lags Far Behind, ABA Journal, at http://www.abajournal.com/news/article/as_governments_open_access_to_data_law_lags_far_behind

2 UK Engineering and Physical Sciences Research Council (2015) EPSRC Future Intelligent Technologies (FIT) Workshop Report, at https://epsrc.ukri.org/newsevents/pubs/fitworkshopreport/

3 For details of the cinematic interpretation, see Ordway III, Frederick Ira (1982) ‘2001: A Space Odyssey in Retrospect’ In Eugene M. Emme (ed.) American Astronautical Society History. Science Fiction and Space Futures: Past and Present pp.47–105. ISBN 0-87703-172-X

4 See Andrew D. Selbst and Julia Powles, ‘Meaningful information and the right to explanation’ (2017) 7 (4) International Data Privacy Law 233.

5 Lilian Edwards, Michael Veale, ‘Slave to the algorithm? Why a ’right to an explanation’ is probably not the remedy you are looking for’ (2017) 16  (1) Duke Law & Technology Review 18; Lilian Edwards, Michael Veale, ‘Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?’ (2018) 16 (3) IEEE Security & Privacy 46; Lilian Edwards, Michael Veale, ‘Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling’ (2018) 34 (2) Computer Law & Security Review 398

6 There are multiple examples of algorithms that fail in each category, including deaths caused by AVs in both road and air transportation, racially biased policing, unlawfully discriminatory health or life insurance, mortgage applications refused on spurious grounds, and so on. See generally Pasquale, Frank A (2006) Rankings, Reductionism, and Responsibility, Seton Hall Public Law Research Paper at http://ssrn.com/abstract=888327; Pasquale, Frank A (2015) Black Box Society: The Secret Algorithms That Control Money and Information, Cambridge: MA, Harvard University Press.

7 Jiang, It’s time to break up the FAANGs (FB, APPL, AMZN, NFLX, GOOGL) (2019) Feb. 1, Business Insider, https://markets.businessinsider.com/news/stocks/amazon-apple-facebook-netflix-google-break-up-faangs-2019-1-1027916057

8 See Christopher Kuner, Fred H. Cate, Orla Lynskey, Christopher Millard, Nora Ni Loideain, and Dan Jerker B. Svantesson, ‘Expanding the artificial intelligence-data protection debate’ (2018) 8 (4) International Data Privacy Law, 289. For a critical view of AI law, see Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 (2) International Data Privacy Law 76; Sandra Wachter, Brent Mittelstadt, Chris Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2018) Harvard J.L. & Tech 1.

9 Marion Oswald ‘Algorithmic-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power’ in ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’ (2018) Philosophical Transactions of the Royal Society A 376 20170359; DOI: 10.1098/rsta.2017.0359

10 19 June 2019, HC Debates Vol.662, Col. 332 http://bit.ly/2ZMeuab

11 Council of Europe CM/Rec(2014)6 Recommendation of the Committee of Ministers to member states on a guide on human rights for Internet users Adopted by the Committee of Ministers on 16 April 2014 at the 1197th meeting of the Ministers’ Deputies. Applied ethics may even lead to a new further Law of Robotics: Ignorantia juris non excusat (“Ignorance of the law is no excuse” – Aristotle).

12 Posing even broader questions about the supplanting of law by technology management: see Roger Brownsword (2016) Technological management and the Rule of Law, Law, Innovation and Technology, 8:1, 100-140, DOI: 10.1080/17579961.2016.1161891

13 Von der Leyen, Ursula (2019) European Commission President-elect: Political guidelines for the next Commission (2019-2024) – “A Union that strives for more: My agenda for Europe”, 16 July 2019, at p.13: https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.

14 Ben Wagner (2019) Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems, Policy & Internet, Vol. 11, No. 1, 2019, 104-122 at https://onlinelibrary.wiley.com/doi/pdf/10.1002/poi3.198

15 Hasselbalch, Gry (2019) Making sense of data ethics. The powers behind the data ethics debate in European policymaking, Internet Policy Review 8 (2). DOI:10.14763/2019.2.1401. Yochai Benkler, Don’t let industry write the rules for AI (2019) Nature 569, 161 doi:10.1038/d41586-019-01413-1

16 Research ethics topics in ethics washing include the regulation of personally identifiable data (using the GDPR and international equivalents), in which abuse, and calls for self-regulation via technology including AI, have inevitably been described as ‘data washing’.

17 Marsden, C. (2018) “Prosumer Law and Network Platform Regulation: The Long View Towards Creating Offdata” 2 Georgetown Tech. L.R. 2, pp.376-398

18 Regulation (EU) 2016/679 — protection of natural persons with regard to the processing of personal data and the free movement of such data [2016] OJ L 119 at Article 20.

19 Michèle Finck, Smart contracts as a form of solely automated processing under the GDPR, International Data Privacy Law, Volume 9, Issue 2, May 2019, Pages 78–94, https://doi.org/10.1093/idpl/ipz004.

20 Committee on Communications, Regulating in a digital world (HL Paper, 9 March 2019, vol 299) 46.

21 “Interoperability” indicates the ability of the digital content or digital service to function with alternate hardware or software; see Art 2 no 12 of the Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content and digital services. See further Wolfgang Kerber, Heike Schweitzer, Interoperability in the Digital Economy, 8 (2017) JIPITEC 39 https://www.jipitec.eu/issues/jipitec-8-1-2017/4531

22 UK Competition and Markets Authority, press release of 9 August 2016, “CMA paves the way for Open Banking revolution”, at https://www.gov.uk/government/news/cma-paves-the-way-for-open-banking-revolution.

23 Wolfgang Kerber, Heike Schweitzer, Interoperability in the Digital Economy, 8 (2017) JIPITEC 39 para 1. Brown and Marsden (2013) Regulating Code, Cambridge, MA: MIT Press p.39.

24 Urs Gasser (2015) “Interoperability in the Digital Ecosystem” <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639210> at p14.

25 Brown and Marsden (2013) supra n.23, pp.38-39.

26 Commission decision of 27 June 2017 Case AT.39740 – Google Search (shopping)

27 Alessandro Mantelero, AI and Big Data: A blueprint for a human rights, social and ethical impact assessment, Computer Law & Security Review, Volume 34, Issue 4, 2018, pp.754-772, ISSN 0267-3649, https://doi.org/10.1016/j.clsr.2018.05.017:

“The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values. Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (Human Rights, Ethical and Social Impact Assessment-HRESIA). This self-assessment model intends to overcome the limitations of the existing assessment models, which are either too closely focused on data processing or have an extent and granularity that make them too complicated to evaluate the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee.”

28 ISO/IEC JTC 1/SC 42 Artificial intelligence: https://www.iso.org/committee/6794475.html

29 Ethics Guidelines for Trustworthy Artificial Intelligence (AI) (April 2019) prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG): https://ec.europa.eu/futurium/en/ai-alliance-consultation

30 The OECD Principles on Artificial Intelligence promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values. They were adopted on 22 May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence: https://www.oecd.org/going-digital/ai/principles/

31 UK Information Commissioner’s Office, Feedback request: profiling and automated decision-making, 6 April 2017 https://ico.org.uk/media/about-the-ico/consultations/2013894/ico-feedback-request-profiling-and-automated-decisionmaking.pdf

32 Hosanagar (2019) 22 May, Vox, at https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias. He collates 10 steps towards ethical AI: Transparency; Explainability; Consent; Discrimination; Accountability to Stakeholders; Portability; Redress and Appeal; Algorithmic Literacy; Independent Oversight; Governance.

33 IEEE (1990) quoted in, Fenareti Lampathaki et al, ‘Infusing Scientific Foundations into Enterprise Interoperability’ (2012) 63(8) Computers in Industry 858, 859.

34 Bob Metcalfe, ‘Metcalfe’s Law after 40 Years of Ethernet’ (2013) 46(12) Computer 26, 28.

35 Facebook Newsroom, A Note From Mark Zuckerberg, (2019) 14 March at https://newsroom.fb.com/news/2019/03/a-note-from-mark-zuckerberg/

36 Smith, Graham (2019) Speech is not a tripping hazard – response to the Online Harms White Paper, at https://www.cyberleagle.com/2019/06/speech-is-not-tripping-hazard-response.html, 28 June: citing Rhodes v OPO [2015] UKSC 32 “The White Paper would place a duty on intermediaries that would most likely result in the suppression, or at least restriction, of material of the kind discussed in Rhodes.” See also Khan, Lina and Pozen, David E., A Skeptical View of Information Fiduciaries (2019) Harvard Law Review, Vol. 133; Columbia Public Law Research Paper No. 14-622. Available at: https://ssrn.com/abstract=3341661

37 Lee, Micah, Battle Of The Secure Messaging Apps: How Signal Beats Whatsapp, The Intercept, June 22 2016 at https://theintercept.com/2016/06/22/battle-of-the-secure-messaging-apps-how-signal-beats-whatsapp/

38 EU Commission (2019) Margrethe Vestager, speech of 3 June: “Competition and the Digital Economy” https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/competition-and-digital-economy_en

39 ACCC, Digital Platforms Inquiry, Final Report, 26 July 2019 at https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/final-report-executive-summary

40 Case T-201/04, Microsoft v Commission, EU:T:2007:289, 1088; Decision 24 May 2004 Case C-3/37792 Microsoft; Decision of 16 December 2009 in Case 39530 Microsoft (Tying)

41 Proposed by Brown/Marsden 2008, supra n.32 at Acknowledgements and p.217.

42 Jacques Crémer, Yves-Alexandre de Montjoye, Heike Schweitzer (2019) Competition Policy for the Digital Age, Report for DG Competition, at Chapter 5: http://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf

43 Kenneth Cukier and Viktor Mayer-Schönberger (2013) Big Data: A Revolution That Will Transform How We Live, Work, and Think, John Murray Publishing.

44 Supra n.17.

45 Digital Competition Expert Panel (2019) Unlocking digital competition, Report of the Digital Competition Expert Panel: “Furman report” at https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel