Facial recognition technology is among the most controversial data-driven innovations currently in use. The Centre for Data Ethics and Innovation has published a paper which seeks to answer several fundamental questions about facial recognition technology systems, including how they are developed, where they have been deployed to date, and the mechanisms by which they are governed.
The CDEI’s findings
Facial recognition technology can be used for varied purposes. Some systems aim to verify people’s identity (for example to unlock an electronic device), while others seek to identify individuals (such as scanning a group of people to see if any are featured on a watchlist).
Facial recognition technology systems have been deployed across the public and private sectors. Several police forces have trialled live facial recognition technology, while banks have installed facial recognition technology functionality within apps with the aim of improving the customer experience.
The report states that, used responsibly, facial recognition technology has the potential to enhance efficiency and security across many contexts. However, the technology also presents several risks, including to privacy and the fair treatment of individuals.
The extent to which a facial recognition technology system is beneficial or detrimental to society depends on the context, as well as the accuracy and biases of the specific algorithm deployed. Each use must be assessed according to its own merits and risks.
The use of facial recognition technology is regulated by several laws, including the Data Protection Act 2018 and, for public sector applications, the Human Rights Act. However, a standalone code of practice for facial recognition technology has yet to be drawn up. Effective governance of facial recognition technology involves not just laws but also industry agreeing on common standards and mitigating hazards of its own accord.
The regulatory regime governing the use of facial recognition technology in the private sector is less extensive than the one for public law enforcement. Policymakers should consider whether there is sufficient oversight of facial recognition technology in contexts such as retail and private property developments. The ICO is currently investigating whether the use of a facial identification system by a property developer in King's Cross for security purposes breached data protection legislation. A central concern is that the surveillance occurred without the knowledge of people walking through the vicinity. Amongst considerations such as fairness and transparency, the investigation will examine whether this deployment was necessary to achieve a substantial public interest and whether any criminal offence data was used.
Next steps
The CDEI will continue to examine the effects of facial recognition technology on society. It is particularly interested in how the technology is being used in the private sector, and where it might be deployed to support Covid-19 response efforts, for example to power the digital ID systems behind Covid-19 health certificates.