This issue has snowballed into The Big Question of 2015. Last year’s warning by Google’s executive chairman, Eric Schmidt, that machines would take non-creative jobs from humans struck a particularly emotional chord, prompting widespread speculation as to whether humans will survive developments in artificial intelligence.[1] Even great minds like Stephen Hawking[2] and Elon Musk[3] have warned that we must remain constantly alert to the capabilities of machines and the uses to which they are put if we humans are to ensure our survival.

Not everyone shares their concern. ‘The Singularity’ is the term its believers use for the moment when machines finally out-compete humans to the point of extinction. In their version of the future, humans create machines and robots which themselves build better and better machines until they become autonomous. Having achieved independence from their human masters, the machines simply no longer support them. One leading prophet of human doom, Stuart Armstrong, reckons ‘there’s an 80% probability that the singularity will occur between 2017 and 2112’.[4]

Far from leading the rebellion against the robotic onslaught, Silicon Valley plays host to this vision. Indeed, Singularity University has been established by corporate founders, including Google and Cisco, ‘to apply exponentially growing technologies, such as biotechnology, artificial intelligence and neuroscience, to address humanity’s grand challenges.’[5]
Meanwhile, the Society for Computers and Law is also doing its best to focus attention on the question of whether machines could shake off the constraints of humanity. On 2 March 2015, SCL will launch its own Technology Law Futures Group with a speech, ‘Superintelligence – a godsend, or a doomsday device?’, by Professor Nick Bostrom of Oxford University’s Future of Humanity Institute and Programme on the Impacts of Future Technology. This will be followed by the SCL Technology Law Futures Conference in London on 18 and 19 June 2015, where we will explore how to keep humans at the heart of technology. We will examine the roadmap to ‘superintelligence’, the concept of humanity-by-design and the rise of the ‘P2P’ or human-to-human economy, and consider what rules should govern developers and the machines they create.
Last, but not least, the Media Board has been considering how to organise the SCL’s articles and other material conveniently for those seeking to put humans at the centre of the machines and applications of the future – a ‘developer’s guide to humanity’, as it were.
There is an abundance of SCL material that will be helpful in designing and developing humane technology. Sure, you can find it easily enough via the website now, if you know what you’re looking for. But given what’s at stake, it seems appropriate to publish more prominent links to key developments in privacy, authentication, security, big data, midata, various media and devices, the Internet of Things, drones, driverless cars, biometrics and so on.
Organising SCL’s material in this way might also inspire others to add material along the same lines to deepen the database, or to organise events that highlight the key issues. And we might even inspire a few developers to hard-wire humanity into their creations.
Simon Deane-Johns is a consultant solicitor with Keystone Law and Chair of the SCL Media Board.
[1] http://www.bbc.co.uk/news/business-25872006
[2] http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
[3] http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
[4] http://fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI