Justice UK has published a report about AI in the justice system.
It says that the justice system plays a vital role in people's lives and our democracy. However, it suffers from several problems: court delays running to years, many people with legal problems unable to access necessary legal advice, and overcrowded prisons.
The UK government intends to use AI to 'revolutionise' public services, and AI is already shaping the justice system through police surveillance, legal research, and advice bots. However, the report says that AI is not a cure-all and carries significant risks. Cases such as the Post Office Horizon scandal and the Dutch child benefits scandal (in which thousands of families were falsely accused of fraud by a discriminatory algorithm) show the serious harms technology can enable. The UK justice system also has more data gaps than any other public service, creating extra challenges for responsible AI use. Attempts to improve the system through reform and innovation should have the rule of law and human rights embedded in their strategy, policy, design and development.
The report sets out a rights-based approach to draw on concrete, well-understood, and enforceable legal rights. It proposes two key requirements:
Goal-led
Have a clear objective of improving one or more of the core fundamental goals of a well-functioning justice system, which include:
- Equal and effective access to justice
- Fair and lawful decision-making
- Openness to scrutiny
Being goal-led aims to ensure that innovations are targeted at genuine use cases that can help deliver better outcomes.
Duty to act responsibly
All those involved in the deployment of AI within the justice system have a responsibility during the design, development and deployment of AI to ensure that the core features of the rule of law and human rights are embedded at each stage.
This should include identifying risks and interrogating their impact in order to prevent future harms. There should also be an obligation to pause, rethink, redesign or even stop development or deployment if significant risks to the rule of law or human rights are identified. The degree of expertise in human rights and the rule of law will naturally differ across the AI 'supply chain'. Some, such as the Ministry of Justice, will have in-depth experience and knowledge; others, such as those in the tech field, may have limited experience. It is the responsibility of those with greater knowledge of human rights and the rule of law to set clear expectations and boundaries for the less experienced. However, the report emphasises that this does not let those without a legal background (for example, in the tech industry) off the hook: each person involved in the 'supply chain' must act responsibly, and doing so produces a stronger overall outcome.
The framework is deliberately simple. The report says “clarity of focus on the purpose of the justice system allows for a clear line of sight between the many potential uses of AI and those which are genuinely in service of a justice system that upholds the rule of law and protects human rights. This report is for all those who have the power to shape our justice system…who may be considering innovation with AI. It also serves a second purpose for wider society, who may use it to scrutinise how others are innovating with AI in the justice system.”
As a macro framework it applies equally to all areas of the justice system – from criminal and civil to corporate litigation and family law. However, the report's authors point out that the detailed considerations will differ in each of these areas, and the methods by which risks are managed will need specific approaches. They intend to develop the practical application of this framework further for each specific area, and welcome contributions to inform the next stage of work.
The report sets out the following matrix for asking key questions when developing a tool.
