The Ada Lovelace Institute has issued a new report called Examining the Black Box: Tools for Assessing Algorithmic Systems. It clarifies the terms around algorithmic audits and impact assessments, and the current state of research and practice. As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.
The Institute has identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment, both dealt with in more detail below. However, these terms do not always mean the same thing to different communities, which bring their own interpretations and frames of reference.
While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches those terms describe. Clarity about which approach is being referred to will therefore help policymakers and practitioners build the evidence and methodology needed to take these approaches forward.
The Institute focuses on algorithm audit and algorithmic impact assessment. For each term, it identifies two key approaches it can refer to:
Algorithm audit
- Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
- Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
Algorithmic impact assessment
- Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
- Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use
Further research and practice priorities
The Institute’s report looks at the state of research and practice in each approach and makes a series of recommendations. There is scope for a range of important work across sectors and approaches.
In addition, the Ada Lovelace Institute plans to host workshops examining the regulatory inspection of algorithms with cross-disciplinary groups in three domains:
- digital media platforms
- pricing and competition
- equalities.
These workshops aim not only to further the conversation in the respective sectors, but also to identify shared needs, methodologies, challenges and solutions for the regulatory inspection of algorithmic systems across sectors.