It’s not often that the art world intersects with technology
law, but that’s exactly what happened when artist Helen Knowles staged a
performance of The
Trial of Superdebthunterbot at the Zabludowicz Collection in north
London on 26 February.
‘A debt-collecting company, Debt BB, buys the student loan book from the government for more than
it is worth, on the condition it can use unconventional means to collect debt.
Debt BB codes an algorithm to ensure fewer loan defaulters by targeting
individuals through the use of big data, placing job adverts on web pages they
frequent. Superdebthunterbot has a “capacity to self-educate, to learn and to
modify its coding sequences independent of human oversight” (Susan Schuppli, Deadly Algorithms). Five individuals have died as a result of
the algorithm’s actions, by partaking in unregulated medical trials. In the
eyes of the International Ether Court, can the said algorithm be found guilty?’
The algorithm has realised that unregulated and dodgy jobs generate cash more quickly, and has steered the vulnerable defaulters towards such jobs.
Debt BB is insolvent and the original programmer has died. The case has been
brought to the International Ether Court under the Algorithm Liability Act,
with Superdebthunterbot standing accused of gross negligence manslaughter.
Participants watched a film of the trial, and then the jury
sat down to deliberate (ably aided by audience contributions). The jury comprised artists, technologists, legal academics, a futurologist and a commercial technology lawyer (me).
Initially, the jury found it difficult to accept the premise that an algorithm could be liable for a crime. In the end, Superdebthunterbot was granted a second chance at life, as there were five votes for guilty and seven for not guilty. However, the discussion brought out a number of interesting themes:
- The emotional and intellectual difficulties of applying a human-based code of ethics (the law) to machines. The concept of negligence appeared to translate fairly well to independently thinking machines, as ‘reasonable foreseeability’ is an objective standard and doesn’t require analysis of any mental state. However, there was a divide between the emotional reaction, which judged the behaviour as morally wrong, and the intellectual desire to impute such behaviour to a rational agent.
- The purpose of punishment, which is a live and controversial debate within human society. The jury was only asked to establish the algorithm’s liability, as sentencing would be left to the judge, but what would be the point of punishing a machine? How would any potential ‘Algorithm Liability Act’ approach the competing strands of punishment: rehabilitation and prevention, retribution, restorative justice (ie helping victims overcome the crime) or even redemption?
- The difficulty of differentiating between an algorithm, as a piece of code, and its physical implementation in a machine or network. It would have been much easier to find the Superdebthunterbot algorithm liable if it had been embodied in a humanoid robot, but it is much harder to do so when the algorithm operates across a network of disparate machines operated by third parties.
- Regulation was a recurring theme. What would it involve? How do we move beyond and improve upon Asimov’s laws? How do we ensure compliance once the human owners or creators are dead or insolvent? How can regulators keep up with an increasingly complex area of technology? How can the public have meaningful oversight and understanding of both the algorithms and the regulators?
- If Artificial Intelligence is to be legally responsible for its actions, is a sufficient level of reflexivity or self-understanding required? Jurors drew parallels with the legal responsibility of children, who are deemed by the law to be responsible for their actions only once they reach a certain age. How would any such maturity level for an algorithm be defined?
- This scenario was not far removed from today’s reality. The jury acknowledged that this is already happening, although in a less visible way. The value of the piece was to make visible and crystallise issues that are already out there. Is the Artificial Intelligence itself the problem, or is the real issue the conversion of humans into data, and the paternalistic manipulation of those humans through technical and organisational processes?
Despite their differences, there was one thing that every
single juror agreed upon: any liability for the Artificial Intelligence must
not in any way let the human owners, operators and creators off the hook; a
reminder that we are all responsible for the future. Has the weight of freedom
ever been so great?
Michael Butterworth is an associate in the commercial
technology team at Kemp Little LLP