AI and DP

March 21, 2017

Even those with the keenest interest in AI and its
interaction with data protection may have struggled to find time in a busy
schedule to read the whole of the ICO's latest version of its paper on 'Big data,
artificial intelligence, machine learning and data protection'. But if you have
the time, it's good, and the 113-page PDF is here.
Or, bearing in mind that it is essentially a discussion paper, you could take a
tip from me, following my noble self-sacrifice in reading (most of) it, and
just read pp 90 to 98, saving the very useful Annex on 'Privacy impact
assessments for big data analytics' for a later date.

The bit that interested me most was the section on Algorithmic transparency
(pp 86 to 89). You'll notice that the section in question is just three and a
bit pages out of 113, which rather reflects the fact that the ICO is more at
home with Big Data than with AI. This relatively brief treatment probably also
reflects the view expressed by Jo Pedder, the ICO's Interim Head of Policy and
Engagement, in a blog post introducing the paper, that:

‘whilst the means by which the processing of personal data
are changing, the underlying issues remain the same. Are people being treated
fairly? Are decisions accurate and free from bias? Is there a legal basis for
the processing? These are issues that the ICO has been addressing for many
years, through oversight of existing European data protection legislation.’

Up to a point, Lord Copper – up to a point. I think AI in
the wild makes life a lot more complicated than that. There are areas where the
old answers won't work.

The ICO paper's own brief summary of its suggestions on
algorithmic transparency is as follows:

  • Auditing techniques can be used to identify the
    factors that influence an algorithmic decision (a toy sketch of one
    such technique follows this list).
  • Interactive visualisation systems can help
    individuals to understand why a recommendation was made and give them control
    over future recommendations.
  • Ethics boards can be used to help shape and improve
    the transparency of the development of machine learning algorithms.
  • A combination of technical and organisational
    approaches to algorithmic transparency should be used.
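
To make that first suggestion a little more concrete, here is a minimal
sketch of one common auditing technique, permutation importance: shuffle each
input factor in turn and watch how far the model's accuracy falls. Everything
in it (the synthetic data, the random-forest model, the scikit-learn calls) is
my own illustrative choice, not anything drawn from the ICO paper.

    # A toy audit of an algorithmic decision via permutation importance.
    # The model and data are illustrative stand-ins only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic data standing in for the inputs to a decision-making model.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each factor in turn and measure how far accuracy falls;
    # a large drop means the decision leans heavily on that factor.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, drop in enumerate(result.importances_mean):
        print(f"factor {i}: mean accuracy drop {drop:.3f}")

Crude as it is, a probe like this shows that 'identifying the factors that
influence an algorithmic decision' is a tractable technical exercise, even if
it falls well short of full transparency.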

It is hard to deny the usefulness of each of these
suggestions, but together they feel like a flimsy fence against the AI
monsters that might face us. Plus I feel that organisations with an 'ethics
board' risk outsourcing ethics when their business model should itself be
imbued with an ethical approach, not to mention the danger of 'ethics boredom'.

Fortunately, Jo Pedder's blog post refers to a plan to set
up a research fund to support work in this area (among others), and I hope
that produces more satisfying answers on AI. I also hope that SCL members
will contribute to the continuing debate on the interface between AI and data
protection. These pages are open for that debate.