Well, it’s here at last. GDPR is finally in force and we can all catch our breath while we wait for the courts, supervisory bodies and enforcement agencies to work out what it actually means in practice. Those (unlawful?) consent emails we’ve been bombarded with will hopefully stop. Slightly miraculously, the Data Protection Act 2018 has also made it to the statute books just in time. So well done to anyone who has been drowning in privacy documentation, retention schedules and the like for the last year – you can come up for air now.
So this month I want to move on from GDPR. We all know implementation is just the start, and I’m certain I’ll return to the topic in the near future. But those of you who have been able to keep up with our website will have noticed a run of excellent articles on AI. With the GDPR hype dying down, the legal implications of AI may well come to the fore again. Of course, data and AI are intrinsically connected, the wholesale collection of data to train algorithms being one of the primary drivers for the GDPR in the first place.
Yet AI raises many opportunities and risks beyond data protection, as those recent pieces on scl.org have shown. Peter Leonard and Toby Walsh looked at the ethical lessons to be learned from the Cambridge Analytica saga in #MeToo for AI? Could Cambridge Analytica happen again? The same week we published Terence Bergin QC and Quentin Tannock’s look at some of the changes to both regulation and the common law that might flow from AI in fintech, in particular those around vicarious liability (see AI: Changing Rules for a Changed World).
Perhaps most importantly, we have had some concrete recommendations on how to ensure that the UK makes the most of AI with the publication of the House of Lords Select Committee report, AI in the UK: Ready, Willing and Able? Lord Tim Clement-Jones chaired the committee and has written an excellent summary for us drawing out the main themes of the report. He makes the point that a lot of lip service has been paid to the ethical development of AI, but the time has now come for concrete action. He also stresses the need to avoid data monopolies and to ensure that algorithmic decision-making does not replicate the biases of the past. All told, the report makes 74 recommendations, and we are led to believe that the Government will accept almost all of them – which must be a testament to the wise views put forward.
Of course a single report will not provide all the answers: it is merely part of an ongoing discussion, one that we are pleased to say will continue both on scl.org and at the SCL Annual Conference. One of the centrepieces will be a discussion panel on AI chaired by Lord Tim Clement-Jones himself.
The use of AI, both by ourselves as lawyers and by our clients, whatever their industry, is going to become a constant feature of the tech lawyer’s life, so the Conference will be a timely chance to hear what’s happening at the cutting edge and to join in the debate. I hope to see you there.
PS: On the theme of participation, we are looking for contributors to a new format-busting flash-talk session at this year’s Conference. Three speakers will each have 10 minutes on a topic of their choice. We are looking for hot topics, inventive presentations and tangible learning for delegates to take away from the session (think “all you need to know about x”). We are particularly keen to encourage new voices who have not spoken at the Conference before, so, whatever your background, if you think you have something to say about tech law and have no fear of a large audience, send your thoughts to hello@scl.org.