The ICO is seeking views to help shape and improve its AI and data protection risk mitigation and management toolkit.
The toolkit is designed to help risk practitioners identify and mitigate the data protection risks that AI systems create or exacerbate. It also aims to help developers think about the risks of non-compliance with data protection law.
The toolkit has been designed to reflect the ICO’s internal AI auditing framework and its AI and data protection guidance. In addition, it provides practical support to organisations auditing the compliance of their own AI systems.
The ICO says that it is looking for views from a wide range of organisations of all sizes and sectors to help make the toolkit as relevant as possible. It wants to hear from people in compliance-focused roles, as well as people in more technical roles who are responsible for the design, development and maintenance of AI systems that process personal data.
The ICO is releasing the toolkit as an alpha version. A beta version will be published in the summer, following initial feedback and further technical development. Beyond that, the ICO will continue to iterate and update the toolkit to keep it relevant and practical.
The call for views ends on 19 April 2021.