On the 25th anniversary of the Universal Declaration of Human Rights, the Society for Computers and Law was established. Over the following 25 years, human rights and human rights law have often taken second place to ethics in debates about the regulation of technology. Now, however, there is a growing recognition that the regulation of artificial intelligence and other technologies should be firmly rooted in the existing human rights framework and should neither undermine nor replace existing human rights standards. Human rights conventions, as living, breathing instruments, were crafted to grow and respond to the development of societies and the new challenges they face. This article looks at why the human rights framework is well suited to the governance of risks to humans from technology and artificial intelligence. I will draw on three specific examples to illustrate how harms have been assessed by the European Court of Human Rights (ECtHR) and conclude with a roundup of some principles to be applied to human rights risk assessment.
Why human rights and why now?
Technology crosses international geographical boundaries and requires regulation that can be understood and applied across those boundaries. Existing international and European human rights standards were internationally agreed in the 1940s and 50s. They form a well-established, enforceable framework; there is a wealth of existing jurisprudence, and the underlying principles map conveniently onto the lifecycle of a product.
Figure: the human rights framework of prevention, monitoring and oversight, and effective remedies, mapped to the lifecycle of a product as described by McGregor, L., Murray, D. and Ng, V. (diagram adapted from https://respository.essex.ac.uk/24505/1/div-class-title-international-human-rights-law-as-a-framework-for-algorithmic-accountability-div.pdf)
The human rights framework has a tried and tested ability to strike a fair balance between competing interests – for example, the right to privacy versus freedom of expression, or innovation versus individual rights.
Under the international systems, the State is the primary duty bearer, with the obligation to prevent harm and protect human rights. In this sense, a failure to regulate appropriately could result in the State breaching its operational duty to protect where it is known, or ought to be known, that human rights are at risk.1 In the business and corporate arenas, the UN Guiding Principles on Business and Human Rights introduced a corporate responsibility both to respect human rights by not infringing on the rights of others and to address adverse human rights impacts related to business activities.
In the UK itself, human rights are protected by domestic law, which is now interpreted and developed in accordance with international human rights law and the Human Rights Act 1998, which gave effect to the ECHR. Notwithstanding recent attacks on the HRA 1998 and the Strasbourg court, the UK’s underlying constitutional and common law adherence to human rights standards predates the adoption of the HRA 1998 by centuries.
Why now?
The UN has consistently called for human rights to be front and centre of technology regulation. This requires listening to affected communities, assessing human rights risks before, during and after technologies are used, implementing a robust legal framework and resisting the temptation of self-regulation by industry, which has been shown to be ineffective. The ‘Summit of the Future’ in 2024 will discuss the need to create a Global Digital Compact for an open, free and secure digital future for all.
The Council of Europe is working on the Draft Framework Convention on AI, Human Rights, Democracy and the Rule of Law. The Convention, if adopted, will impose general obligations to respect human rights and freedoms, maintain the integrity of democratic processes and respect the rule of law. It takes a risk-based approach, in line with the EU AI Act, and requires risk assessment and risk management to mitigate human rights risks.
At the time of writing, political agreement has been reached on the EU AI Act, which will require those who deploy high-risk AI systems to conduct fundamental rights impact assessments and notify the relevant national authorities in Europe of the result. Although EU law is no longer applicable in the UK, companies that operate in Europe will be required to adhere to it.
In the UK itself, there are multiple new duties of care and duties to conduct risk assessments contained in the Online Safety Act 2023 and the data protection legislation (both existing and forthcoming). Companies are also required to comply with the Age-Appropriate Design Code where online services are likely to be accessed by children.
Regulation raises complex questions about who has rights and who is responsible for harm. One significant issue is whether, and to what extent, individuals have a right to know that technology is being used at all in decisions affecting human rights, including what that technology is and how it works. Commercial sensitivity is often cited as a reason for refusing to disclose this information. Further, an obvious current loophole in automated decision making is that where a human is ‘in the loop’, many of the existing legal protections fall away. Finally, where human rights should be litigated may also raise complex issues given the transnational nature of technology.
Jurisdiction
Article 1 of the European Convention on Human Rights provides that ‘the High Contracting Parties shall secure to everyone within their jurisdiction the rights and freedoms defined in Section I of this Convention’. The obvious first question when seeking to establish and enforce rights, given that technology does not always respect geographical boundaries, is: who is ‘within the jurisdiction’?
As an exception to the principle of territoriality, the ECtHR has recognised that acts of States Parties performed, or producing effects, outside their territories can constitute an exercise of jurisdiction within the meaning of article 1 (see e.g. H. F. and ors v France)2. Helpfully, the ECtHR has very recently considered this in the context of electronic communications in Wieder and Guarnieri v The United Kingdom3 (although only at first instance, this case is likely to be of significant wider importance as it is the first in this field).
The case concerns the bulk interception of communications by the UK intelligence agencies pursuant to section 8(4) of the Regulation of Investigatory Powers Act 2000 and the receipt by the United Kingdom of material intercepted by foreign counterparts. Following the ECtHR’s decision in Big Brother Watch and ors v United Kingdom4, the complainants submitted complaints to the Investigatory Powers Tribunal (IPT) to discover whether the UK intelligence agencies had unlawfully obtained their information. The IPT refused to investigate on the grounds that the complainants lived outside the UK, which would have left them with no remedy. Before the ECtHR, the UK Government asserted that interception of communications by a contracting state did not fall within that State’s jurisdictional competence for the purposes of article 1 of the ECHR where the sender or recipient complaining of a breach of their article 8 rights (privacy and correspondence) was outside the territory. The Government sought to argue that the interference happened to the individual and therefore took place where the individual was located. The ECtHR disagreed and held that ‘interference with privacy of communications clearly takes place where those communications are intercepted, searched, examined and used and the resulting injury to privacy rights of sender and/or recipient will also take place there’ (para 93); any breach of the right to privacy therefore fell within the territorial jurisdiction of the UK. This has potentially significant wider implications. For example, if child sex abuse material were created abroad but examined and used in the United Kingdom, would this engage the duty of the state to protect the victim, and to what extent?
The Court also considered of its own motion, and left open, an interesting evidential question about how an individual can establish victim status when they do not know through which states their online communications have passed (see paras 96-100).
Right to life and recommender algorithms?
Recommender and content moderation algorithms automate what people see or do not see, what stays online and what gets promoted. Recommender algorithms are great if you want to find a pair of trousers, Christmas presents or a decent restaurant, but they have also been implicated in genocide, the distortion and disruption of democracy, harm to children’s mental health and suicide, among other harms, and they often discriminate against persons with protected characteristics in terms of access to information and services.
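To illustrate the mechanism at issue, the sketch below is a minimal, hypothetical example (not any platform’s actual system); the item names and scores are invented. The point is simply that a system ranking purely on predicted engagement decides what is promoted without any notion of harm.

```python
# Minimal, hypothetical sketch of engagement-based ranking - not any
# platform's actual system. Item names and scores are invented; the point
# is that ranking purely on predicted engagement decides what is promoted
# with no notion of whether the content is harmful.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # e.g. a modelled click/watch probability


def recommend(items: list[Item], k: int = 3) -> list[Item]:
    """Return the top-k items ranked purely by predicted engagement."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)[:k]


feed = [
    Item("news-report", 0.21),
    Item("cookery-video", 0.34),
    Item("graphic-self-harm-post", 0.61),  # distressing content can score highly
]

for item in recommend(feed, k=2):
    print(item.item_id, item.predicted_engagement)
# The highest-scoring items are promoted regardless of the harm they may cause.
```

Real systems are far more complex, but the design choice of optimising a single engagement signal is the feature that risk assessment duties of the kind discussed below are intended to address.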
In 2022, an inquest5 found that Molly Russell, a 14 year old who died in 2017, died from ‘an act of self-harm whilst suffering from depression and the negative effects of online content.’ The recommendation of particularly graphic content, both romanticising and portraying self-harm and suicide as an inevitable consequence of depression, was found to have contributed to her death in a ‘more than minimal way’. In the Prevention of Future Deaths report, the coroner recommended legislation from government and self-regulation by platforms.6
Article 2 of the ECHR protects the right to life. The State has a duty to provide effective protection, which consists of both an operational duty to take action against a real and immediate risk to life in certain circumstances and a duty to conduct an effective investigation following a death. The UK has enacted the Online Safety Act 2023 with the intention of better protecting children from the harms of online services and of enabling relevant evidence to be obtained in the event of a death. Ofcom’s consultation on illegal harms duties closes in February 2024, and it will be consulting on draft codes of practice in respect of children during 2024, with guidance expected by Spring 2025.7 Whilst clear steps to regulate online harms are welcome, there are serious concerns that the Act itself will lead to significant infringements of human rights, including the rights to privacy, freedom of expression and association. These concerns persist despite assurances given during the passage of the bill, for example, that requirements to introduce child sex abuse material detection will not be implemented until the technology allows this to be done in a manner that protects privacy.
Generative AI, image based abuse and deepfakes
Image-based rights abuses, discrimination and gender-based harm can be exacerbated by foundation models; generative AI has consistently been found to replicate and perpetuate gender and race inequality and stereotypes. Digital manipulation of images, sound and video impinges on the privacy and data rights protected under Article 8 of the ECHR. However, whilst images such as the ‘Pope in a puffa’ or a sci-fi version of your own image might be harmless fun, the reality is that deepfakes disproportionately harm women and girls. 90% of an estimated 85,000 deepfakes known to be circulating online in 2021 depicted non-consensual pornographic images of women. Used to harass and abuse, deepfakes are degrading, humiliating and discriminatory.
A failure to protect in these circumstances potentially engages articles 3, 8 and 14 of the ECHR. In Buturuga v. Romania8, the ECtHR recognised that cyberbullying is an aspect of violence against women and girls and that it can take a variety of forms, including cyber breaches of privacy, intrusion into the victim’s computer and the capture, sharing and manipulation of data and images, including private data. The Romanian authorities had treated the cyber elements of a course of conduct against the victim by her husband as a ‘data protection’ or civil issue rather than as part of a course of conduct amounting to violence against women. The failure by the Romanian authorities to investigate the cyberbullying from the perspective of domestic violence resulted in a failure to comply with the positive obligations inherent in article 3 to protect the victim. The Court held that there had been a violation of articles 3 and 8.
GREVIO’s General Recommendation No. 1 on the Digital Dimension of Violence Against Women and Domestic Violence9 requires States Parties to the Istanbul Convention on preventing and combating violence against women and domestic violence to recognise the digital dimension of violence against women as a form of gender-based violence, and to take the necessary legislative and other measures to prevent it and to protect women and girls.
Tech-facilitated abuse is included in the Domestic Abuse: Statutory Guidance, July 2022.10 Additionally, the Online Safety Act 2023 seeks to remedy some of the protection gaps by imposing a duty of care on providers of regulated search services and user-to-user services to prevent and remove illegal content, including revenge pornography. Ofcom will consult on draft codes of practice in respect of pornography and the protection of women and girls in 2024, with guidance on gender-based harms expected by Spring 2025.11 The devil will be in both the detail and the implementation, in particular by police forces, and many commentators believe that the current guidance is insufficient and a protection gap remains.
Surveillance technologies
In R (on the application of Bridges) v Chief Constable of South Wales Police12, the Court of Appeal considered the police use of live facial recognition technology and found that there had been deficiencies in its deployment, both in terms of article 8 ECHR and the Public Sector Equality Duty in the Equality Act 2010. Subsequent to that decision, the ECtHR ruled in Glukhin v Russia13 that the use of facial recognition technology to identify, locate and arrest a peaceful protestor was in breach of articles 8 (right to privacy) and 10 (freedom of expression) of the ECHR and was capable of having a chilling effect on the rights to freedom of expression and assembly. The court held that, in implementing facial recognition technology, there is a need for detailed rules governing the scope and application of measures, and strong safeguards against the risk of abuse and arbitrariness. The judgment is not without some difficulties. As noted by commentators, it leaves unanswered two questions in particular: i) how to define the notion of publicly available data that can be used for facial recognition analysis and ii) when the general public interest or national security test justifies the use of facial recognition technology.
Discrimination
Artificial intelligence and automated decision making have exposed the existence and, to a degree, the extent of the discrimination that currently exists in society across all forms of decision making. Predictive decision making tends to entrench pre-existing discrimination because that discrimination is embedded within the training data or the questions asked of that data by humans. This is the case even where the output of a predictive algorithm is more accurate than human decision making. This is not a new problem.
In 1986, the Commission for Racial Equality was informed by two senior lecturers at St George’s Hospital Medical School in London that a computer programme developed in the 1970s and used to screen applicants for places at the school unfairly discriminated against women and people with non-European sounding names. The programme was achieving a 90-95% correlation with the grading of the selection panel and was ‘not introducing new bias but merely reflecting that already in the system’. However, the consequence of learning from previous selection panels’ decisions was that the algorithm was weighted against women and those from racial minorities.
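To make the mechanism concrete, the following is a deliberately simplified, hypothetical sketch (it is not the St George’s program; the groups, grades and thresholds are invented): a rule ‘learned’ from past selection decisions faithfully reproduces the differential treatment embedded in those decisions without introducing any new bias of its own.

```python
# Deliberately simplified, hypothetical sketch: a rule "learned" from past
# selection decisions inherits whatever bias those decisions contained.
# Groups, grades and thresholds below are invented for illustration.


def historical_decision(grades: int, group: str) -> bool:
    # Hypothetical past panel behaviour: identical grades, but a stricter
    # threshold applied to group "B". This is the bias already in the system.
    threshold = 70 if group == "A" else 78
    return grades >= threshold


# A synthetic history of past panel decisions.
history = [
    (grades, group, historical_decision(grades, group))
    for grades in range(60, 90)
    for group in ("A", "B")
]


def learn_threshold(history, group):
    """'Learn' the lowest grade at which the past panel admitted this group."""
    return min(g for g, grp, admitted in history if grp == group and admitted)


model = {grp: learn_threshold(history, grp) for grp in ("A", "B")}
print(model)  # {'A': 70, 'B': 78} - the learned rule mirrors the past bias exactly

# An applicant with grades of 75 is now treated differently depending only on
# group membership, even though the model "merely reflects" earlier decisions.
for grp in ("A", "B"):
    print(grp, "admitted" if 75 >= model[grp] else "rejected")
```

A high correlation with past panel decisions, of the kind reported at St George’s, is exactly what one would expect of a system built to imitate those decisions, biased or not.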
By way of more current examples, discrimination in predictive decision making has been identified in almost every area in which it has been introduced: predictive policing (HART), sentencing decisions (COMPAS), visa triaging algorithms (the UK Home Office), employment and recruitment software, the A-Level exam results fiasco, welfare benefits (SyRI in the Netherlands), facial recognition (Gender Shades), and access to services and housing allocations. The discrimination is so pervasive that Wendy Hui Kyong Chun argues in ‘Discriminating Data’ that the methods used within big data and machine learning are specifically designed to group ‘like’ people together, thus encoding segregation, eugenics and identity politics through their assumptions and conditions. Given this, the potential for unlawful discrimination in algorithmic decision making is significant.
Is it all doom and gloom?
Robust risk assessments of the potential for a rights breach occurring, the level of harm likely to result and any mitigation measures available, carried out throughout the product development, deployment and decommissioning stages, should reduce the likelihood of human rights breaches occurring. In many cases, thin