The Equality and Human Rights Commission (EHRC) has made tackling discrimination in AI a major strand of its new three-year strategy. There is emerging evidence that bias built into algorithms can lead to less favourable treatment of people with protected characteristics such as race and sex.
As a result, the EHRC has now published guidance aimed at helping organisations avoid breaches of equality law, including the public sector equality duty (PSED). The guidance gives practical examples of how AI systems may be causing discriminatory outcomes.
The EHRC has said that from next month, it will work with a cross-section of around thirty local authorities to understand how they are using AI to deliver essential services, such as benefits payments. This follows concerns that automated systems are inappropriately flagging certain families as a fraud risk.
The EHRC is also exploring how best to use its powers to examine how organisations are using facial recognition technology, following concerns that the software may be disproportionately affecting people from ethnic minorities.
It hopes that these interventions will improve how organisations use AI and encourage public bodies to take action to address any negative equality and human rights consequences.
The monitoring projects will last several months and will report initial findings early next year.
The guidance advises organisations on how the public sector equality duty applies to automated processes, how to be transparent about the technology they use, and the need to keep systems under regular review. It is also useful for private sector companies providing services and technology to the public sector.