For the past few years, the UK Government has increasingly recognised that the growing prevalence of AI within the public and private sectors has had an inescapable impact on the UK and its citizens. In many cases, AI has been met favourably, in recognition of the helpful uses and opportunities it offers, such as identifying criminal financial behaviour or tax avoidance. However, as with many technologies, it is very much a double-edged sword. The incorporation of AI and technology into everyday life, particularly in sensitive areas such as healthcare, has also led to public scepticism and distrust.
This is not without reason. A cursory search for AI in its earlier applications within these sectors flags several issues: inappropriate data use, biased algorithms, and inaccurate outputs. To address these earlier, but by no means irrelevant, concerns, the Government has made a strong push towards developing a digital environment based on trust and transparency. A notable example of this can be found in the creation of a roadmap and similar initiatives targeted at building an effective AI assurance ecosystem.
More recently, the Government has focused on shaping AI standards. In January 2022, it announced a pilot to that end, in partnership with the British Standards Institution, the Alan Turing Institute, and the National Physical Laboratory. Following this announcement, the Government released a statement that, in partnership with the National Health Service, it would commence a further AI-focused initiative, this time relating to accountability and impact assessments within AI and the health sector.
The announcement stems from earlier discussions between the NHS (and its NHS AI Lab) and the Ada Lovelace Institute, directed at creating a framework for assessing the impact of medical AI. In this pilot, the NHS will act as the first healthcare body to trial algorithmic impact assessments (“AIAs”) within its organisation. The primary purpose is to tackle health inequalities and biases in the systems underpinning health and care services, thereby dispelling some of the distrust surrounding these systems within the healthcare sector.
What exactly are algorithmic impact assessments?
Best described by the Institute, AIAs are a “tool used for assessing possible societal impacts of an AI system before the system is in use”.1 Their purpose, among other things, is to create greater accountability and transparency in the deployment of AI systems.2 In this way, it is hoped, they will build trust in AI by mitigating the potential for harm to specific categories of persons.3
In many ways, AIAs are similar to the impact assessment tools commonplace today. A prime example is the data protection impact assessment, which evaluates, and works to minimise, the impact that data processing technologies and policies would have on a person’s privacy rights. In similar fashion, an AIA allows organisations to assess the potential risks and outcomes that may arise from the data an AI system is fed, whether non-sensitive, such as hospital admission rates, or more sensitive, such as gender, ethnicity, or family history of illness.
By identifying the potential risks posed by the incorporation of a given AI programme, organisations may then alter their system at an early stage of development, prior to wider implementation.
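For illustration only, the minimal sketch below shows how such an AIA-style screening step might be expressed in code. The risk categories, weights, and threshold are hypothetical assumptions made for this example; they are not drawn from the Institute's framework or any published AIA.

```python
# Hypothetical, simplified sketch of an AIA-style pre-deployment screen.
# The scoring scheme below is illustrative only.

from dataclasses import dataclass

@dataclass
class DataField:
    name: str
    sensitive: bool  # e.g. ethnicity, gender, family medical history

def screening_score(fields: list[DataField], automated_decision: bool) -> int:
    """Crude risk score: sensitive inputs and automated decisions add risk."""
    score = sum(2 if f.sensitive else 1 for f in fields)
    if automated_decision:
        score += 3
    return score

fields = [
    DataField("hospital_admission_rate", sensitive=False),
    DataField("ethnicity", sensitive=True),
    DataField("family_history", sensitive=True),
]

score = screening_score(fields, automated_decision=True)

# A score above a chosen threshold triggers a fuller assessment and
# mitigation work before the system proceeds past early development.
if score >= 5:
    print(f"score={score}: full impact assessment and mitigation required")
else:
    print(f"score={score}: standard review")
```

In practice, a real AIA is a structured, largely qualitative exercise rather than a numeric score; the sketch simply illustrates the idea of flagging higher-risk data and decision types for deeper review before deployment.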
Why is this such a significant step?
The piloting of AIAs in a setting such as the NHS is a significant step because they are not yet extensively used in either the public or the private sector. As noted above, the pilot marks the first time a public healthcare body has sought to incorporate them within its organisation. Before now, there has been little coherence or uniformity in approach, and no guarantee that AIAs produce the intended outcome. Equally, there has been no guarantee that they are effective in reducing the risks of bias or inadvertent harm to those whose data is being processed. This pilot is therefore an opportunity to test the framework created by the Institute, which the Institute may then use to refine its proposal moving forwards.
Although a novelty in the health service, it should be noted that approved AIA models already exist and are used in other contexts. In 2020, the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making implemented a standard assessment form, aimed at assisting Canadian civil servants in managing the risks of automated decision-making in the public sector. Alongside these more rigid assessment tools, softer assessment frameworks have also begun to emerge, such as the IEEE’s AI standards or the UN Guiding Principles on Business and Human Rights, which are intended to be used alongside an organisation’s existing code of ethics.
The implementation of AIAs within the NHS therefore offers an invaluable opportunity to further determine their efficacy and to fill the gap in knowledge and data currently slowing their adoption. Should this pilot be successful, further pilots in other areas of the public and private sectors are likely to follow.
The NHS Pilot
The NHS is set to trial this assessment across several initiatives and will also use it as part of the data access process for both the National COVID-19 Chest Imaging Database and the National Medical Imaging Platform.
The objective is to support researchers and developers in assessing the possible risks and biases of AI systems in relation to patient data and members of the public before those developers can access these resources. As noted in the announcement, while artificial intelligence has the potential to support health and care workers in delivering better care, it may also exacerbate existing health inequalities if certain biases are not properly considered. For example, the Institute notes that AI systems have been less effective at diagnosing skin cancer in people of colour, largely attributed to biased training and the lack of data on which those systems were trained. By involving developers and impact assessments at an early stage, patients and healthcare professionals can become involved sooner in the use and development of medically orientated AI, reducing instances of polluted or biased data and improving patient outcomes.
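As a concrete illustration of the kind of check an AIA might prompt, the sketch below compares a model's diagnostic sensitivity across skin-tone groups. The records, group labels, and tolerance are invented for the example and do not reflect the NHS pilot or any real system.

```python
# Hypothetical subgroup performance audit: does the model detect disease
# equally well across groups? All data below is illustrative only.

from collections import defaultdict

def sensitivity_by_group(records):
    """records: (group, true_label, predicted_label), with 1 = disease present."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, true, pred in records:
        if true == 1:
            pos[group] += 1
            if pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

records = [
    ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 1, 0), ("lighter", 0, 0),
    ("darker", 1, 0), ("darker", 1, 1), ("darker", 1, 0), ("darker", 0, 0),
]

rates = sensitivity_by_group(records)
print(rates)  # e.g. {'lighter': 0.67, 'darker': 0.33}

# A large gap between groups is the kind of finding an assessment would
# surface, prompting a review of training data coverage before deployment.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Sensitivity gap exceeds tolerance: investigate training data")
```

A disparity of this kind is precisely what the skin-cancer example above describes: the model underperforms for the group under-represented in its training data.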
The announcement goes on to note that this pilot complements the ongoing work of the ethics team within the NHS AI Lab to ensure that training data and system testing produce outcomes reflective of diversity and inclusivity, thereby creating a far more useful set of training data and an overall increase in public trust.
Breaking ground: a pioneering framework for assessing the impact of medical AI
AI in healthcare (and, even more widely, in the public sphere) will not be successfully leveraged unless the public are confident that their health data will be used in an ethical manner, assigned its true value, and used for the greater benefit of UK healthcare. This point has been highlighted best by Lord Clement-Jones in a number of his discussions on the pending Health and Care Bill. While the pilot will not be the final step in achieving this goal, it is certainly a positive step in building trust that AI can work to the benefit of patients and practitioners.
Although this particular pilot of the framework is to be carried out by the NHS, the Institute notes that its proposal has been developed to assist software developers, researchers, and policymakers in creating and implementing AIAs across a number of healthcare sectors. One area that would benefit from the implementation of these protocols is medical devices. The use of AI within sophisticated surgical machinery, testing equipment, and diagnostic tools offers unparalleled potential in the provision of accurate and speedy healthcare. Such devices do, however, suffer from the same scepticism and distrust that technology faces within a service that requires a human touch. The use of AIA pilots in medical device procedures may well increase support for their use and allow members of the public to see that their data and care are being handled properly.
It should be noted, too, that given the wide applicability of the Institute’s framework, its use does not stop at healthcare. It therefore serves as a useful resource for anyone seeking to create AIAs for implementation throughout the design and incorporation stages of AI in their own sector.
———–
Sources
[1] Ada Lovelace Institute and DataKindUK. (2020). Examining the Black Box: tools for assessing algorithmic systems. Available at: https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems
[2] Knowles, B. and Richards, J. (2021). ‘The sanction of authority: promoting public trust in AI’. Computers and Society. Available at: https://arxiv.org/abs/2102.04221
[3] Raji, D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D. and Barnes, P. (2020). ‘Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing’. Conference on Fairness, Accountability, and Transparency, pp.33–44. Barcelona: ACM. Available at: https://doi.org/10.1145/3351095.3372873
———–