On 15 October the Government published the findings of its
independent review ‘Growing
the artificial intelligence industry in the UK’. The report sets out an
ambition for the UK to become the best place in the world for AI businesses to
set up, grow and thrive. Although it’s hard to disagree with the thrust of the
report, it leaves a number of key questions unanswered and triggers quite a few
more.
Recommendations
The report sets out a number of recommendations. Businesses
that provide or use AI will certainly welcome many of them,
particularly those designed to increase the UK AI talent pool (by increasing
education, training and diversity in AI). Businesses will also be relieved to
see that the report doesn’t recommend the introduction of new regulation on AI.
Instead, the report recommends, amongst other things, that the Government
establish a UK AI council, a data privacy framework and the development of
‘data trusts’.
AI Council. The
report envisages that the AI council would act as a strategic oversight body
for AI, establishing a forum for coordination and collaboration between
industry, the public sector and academia to discuss issues such as fairness,
transparency and accountability. Although such a forum could be useful, I have
concerns that this envisages a one-size-fits-all approach. Given the variety of
potential AI applications, it is difficult to see how one leadership body would
be appropriate for all applications. Even if a cross-sector forum is set up, I
would expect relevant industry stakeholders and regulators to work together to
consider the implications of AI at a sector level (as we have seen to date
with, for example, the insurance industry in the context of autonomous
vehicles).
Data Privacy
Framework. The report does not discuss the key issue of data privacy in
great detail. However, it does acknowledge the potential legal constraints on
AI imposed by compliance with the GDPR. It recommends that the ICO and the Alan
Turing Institute develop a framework for explaining processes, services and
decisions delivered by AI, to improve transparency and accountability,
including guidance on how to explain decisions and processes enabled by AI.
Again, although this recommendation is to be broadly welcomed, a general
framework for AI applications provided by the ICO may not sufficiently address
the challenges triggered by different AI use cases and, of course, would not
cover non-personal data use cases where decision-making transparency may still
be required.
Data Trusts. To
increase the sharing of data for the purposes of AI, the report recommends the
development of ‘data trusts’ which parties would form to improve trust and ease
around data sharing. These trusts would be a ‘set of relationships underpinned
by a repeatable framework, compliant with parties’ obligations to share data in
a fair, safe and equitable way’. The report also suggests that a support
organisation, the Data Trusts Support Organisation (DTSO), could be developed
which would lead on the development of tools, templates and guidance on data
sharing so that data owners and consumers can come together to form trusts. It
envisages that the DTSO would act as a ‘trustee’, a third party that helps
manage a data trust. The report also suggests that the DTSO would act as a
trusted advisor on GDPR.
It is really not clear exactly what the report authors have
in mind here. For example, what is a data trust other than a contractual
agreement between two parties? What would the DTSO actually do in terms of
managing a data trust, other than provide guidance and templates? And why is the
DTSO encroaching on the ICO’s remit in terms of GDPR guidance? The reference
to ‘trusts’ and ‘trustees’ in this context is potentially confusing, given that
the UK has a whole body of trust law.
In terms of public sector sharing of data, having a
framework and templates could be helpful, to ensure consistency and good
practice across government agencies. In the private sector, it is harder to see
how creating a standard template for data sharing would be possible or useful.
Although there will be some common issues that arise in data sharing
arrangements, many issues will depend on who the data sharing parties are, what
they want to do with the data and the commercial agreement reached between the
parties. Different arrangements would require different contractual terms. For
example, the sharing of personal data between organisations in the pharma
sector is likely to have very different implications from the sharing of
non-personal sensor data between stakeholders in the oil and gas industry.
Also, competition concerns may arise in certain circumstances (for example,
data sharing between a group of larger industry players), but not in others. Moreover, the
contractual agreement between the relevant parties would govern the relationship,
so it is not clear what role the DTSO would play once the agreement is signed.
Further clarity on all of these issues would be welcome.
What the report doesn’t cover
Legal constraints.
Given the authors are technologists, not lawyers, it is perhaps not surprising
that the report doesn’t delve deeply into the really difficult tensions which
arise between AI and law and regulation, including the questions of privacy,
security, accountability, transparency, control, risk allocation and liability.
I agree that general regulation of the technology wouldn’t be useful,
particularly at this early stage. However, as AI and its use becomes more
widespread, a risk-based approach to regulation focused on particular
applications of the technology (rather than the technology itself) may be
needed. In the meantime, I believe that staged, considered intervention,
consisting of responsible self-regulation and standard setting is the best
approach.
Ethical issues. The
report makes clear that tackling ethical and social questions is beyond the
scope and expertise of the industry-focused review. However, there is a role
for government to play here and the House of Lords Select Committee’s current
review into AI is looking at these issues. The development of ethical
frameworks and principles would be helpful for business, and lawyers can make an
important contribution here.
Global cooperation.
The report doesn’t really address the need for global cooperation and
engagement. As Baker McKenzie pointed out in our recent submission to the Select Committee on Artificial
Intelligence, we believe that given the cross-border reach of AI, international
cooperation and regulatory harmonisation are crucial. Our own research shows that many global corporations agree. We
believe that it’s important for the UK to lead and engage in dialogue with
other nations to encourage international initiatives, knowledge sharing and the
implementation of best practices.
I welcome the ambition to make the UK a world leader in AI.
But given Brexit and other key challenges currently facing the Government,
whether the UK achieves this ambition will depend ultimately on whether the
Government is willing to dedicate the time, money and commitment needed to make
this a reality.
Sue McLean is a Technology and Fintech Partner at Baker McKenzie LLP
This article is an edited version of a Baker McKenzie client
alert.