The Institute of Work has issued a report on the use of AI in hiring. It says that the use of AI presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, the Institute’s research, which examines tools for auditing AI used in recruitment, finds that these tools are often inadequate for ensuring compliance with UK equality law, good governance and best practice.
The report argues that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work, and it presents first steps that could be taken to achieve this.
The report covers the following issues:
- It makes the case for evaluating the impact of AI on equality.
- It reviews the technical, statistical and necessarily narrow approaches to defining bias and fairness in AI that are often used in auditing tools (a minimal sketch of two such metrics follows this list).
- It outlines how these definitions fall short in addressing the equality risks posed by machine learning hiring systems, and sets out the expectations of UK equality law.
- It reviews existing tools for auditing AI systems in hiring, evaluating the strengths and limitations of each.
- It outlines how technical auditing should fit within a broader process of equality impact assessment.
- Finally, the report identifies directions for future research, policy and legal development, focussing on where the stakes are highest.
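To make those statistical definitions concrete, here is a minimal sketch, not drawn from the report, of two metrics commonly computed by auditing tools: the selection-rate ratio behind the US “four-fifths” rule of thumb, and the demographic parity difference. All function names, data and thresholds are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of candidates selected (decision == 1) within each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        selected[g] += d
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest; the US
    'four-fifths' rule of thumb flags ratios below 0.8."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

def demographic_parity_difference(decisions, groups):
    """Largest gap between any two group selection rates."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Invented screening outcomes: 1 = shortlisted, 0 = rejected.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(disparate_impact_ratio(decisions, groups))        # ≈ 0.33
print(demographic_parity_difference(decisions, groups)) # ≈ 0.4
```

Metrics like these illustrate the report’s point about narrowness: a system can pass such a threshold test while still disadvantaging groups defined by shared traits that are not protected characteristics.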
The Equality Act 2010 is the main source of non-discrimination law in the UK, covering both direct and indirect discrimination. The report points out that machine learning systems can replicate inequalities that do not conform to narrow definitions of discrimination, and can identify groups with common traits not classified as protected characteristics, making decisions over time that could disadvantage or exclude them from the labour market at scale.
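This replication effect can be shown with a small synthetic sketch, not taken from the report; the data, feature names and bias pattern are all invented. Even when the protected characteristic is withheld from a model, a correlated proxy feature can reproduce the same disadvantage if the system is trained on historically biased decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented data: a protected attribute that the model never sees,
# and a proxy feature (imagine a postcode area) correlated with it.
protected = rng.integers(0, 2, n)
proxy = (protected + rng.normal(0, 0.5, n) > 0.5).astype(float)

# Invented historical shortlisting decisions, biased against group 1.
past_shortlisted = ((1 - protected) * 0.6 + 0.2 > rng.random(n)).astype(int)

# Train only on the proxy feature; the protected attribute is excluded.
model = LogisticRegression().fit(proxy.reshape(-1, 1), past_shortlisted)
predicted = model.predict(proxy.reshape(-1, 1))

# Selection rates still diverge by protected group, despite its exclusion.
for g in (0, 1):
    print(g, predicted[protected == g].mean())
```

Dropping the protected attribute is therefore not a safeguard on its own, which is one reason the report treats purely technical auditing as insufficient.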
The report argues that there are sound business, technical, legal and policy reasons why employers should aim to exceed the strict requirements of the law. Exercising caution in this way, and championing good conduct, will reduce the risk of a claim or finding under the Equality Act and support the development of AI systems that promote equality. It will also help establish new norms of best practice and build trust in AI at a critical time in its development and use.
To help employers achieve these goals, the report proposes an Equality Impact Assessment (EIA) to evaluate the effects of AI use, supporting and guiding human evaluation of AI systems. The EIA therefore focuses on key human decision-making points in the design and deployment of an AI system:
- selection of the AI system;
- selection of the training data sets;
- selection of the outcome; and
- selection of the variables.
Legal codes from the EHRC and the ICO in the UK should provide detailed guidance on the application of the Equality Act, the GDPR and the Data Protection Act. Guidance and statutory codes from regulators are particularly important where clear interpretation and application are needed to inform design as well as use. Regulation also needs to be reviewed more widely, and such a review should consider the expectations of the Equality Act and the challenges identified in the report.
Alongside industry and legal standards, EIAs should be developed across sectors. An EIA should be commenced before an AI hiring system is deployed, enabling organisations to assess risks and evaluate the potential impacts of their system in advance. Evaluation should then continue after deployment, extending to legal compliance, assessment of actual impacts, and positive steps that can be taken at each key decision-making point. The Institute will launch a consultation on an EIA in late May to coincide with the tenth anniversary of the Equality Act. The aim of the EIA will be to promote equality rather than embed inequality.