After a stream of technology-related announcements in recent weeks, the UK government has now published its White Paper on AI regulation. The UK Science and Technology Framework sets out the government’s strategic vision and identifies AI as one of five critical technologies.
The proposals have been informed by feedback received in response to the government’s call for views on its 2022 policy paper. That paper signalled a much more hands-off approach than the EU’s strict AI Regulation, rejecting a single AI regulator in favour of six cross-sectoral principles based on the OECD’s AI principles.
The government has made an initial assessment of AI-specific risks and their potential to cause harm, including risks to safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.
The government says that it will “avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI”. Instead of giving responsibility for AI governance to a new single regulator, existing regulators, such as the Health and Safety Executive (HSE), the Equality and Human Rights Commission (EHRC) and the Competition and Markets Authority (CMA), will be tasked with devising tailored, context-specific approaches that suit the way AI is used in their sectors. The government will create an AI framework built around four key elements:
- Defining AI based on its unique characteristics to support regulator coordination.
- Adopting a context-specific approach.
- Providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities.
- Delivering new central functions to support regulators to deliver the AI regulatory framework, maximising the benefits of an iterative approach and ensuring that the framework is coherent.
The White Paper outlines the five principles that the regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:
- safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed;
- transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail appropriate to the risks posed by the use of AI;
- fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes;
- accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes; and
- contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI.
Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.
£2 million will fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.
The consultation ends on 21 June 2023. The government’s announcement comes against the backdrop of the House of Commons’ Science and Technology Select Committee inquiry on governance of AI in the UK, which has yet to report. It remains to be seen whether this more flexible approach is workable when organisations operating cross-border will also have to comply with the EU’s stricter regime.