One of the key themes of the 2015 Technology Futures conference was that of ‘keeping humans at the heart of technology’. And, while there was discussion of some rather futuristic concerns about technology (including, for example, the idea of ‘superintelligence’, in which machines take on an existence of their own and wipe out humanity – probably many years away, if ever), there was also discussion of some of the nearer-term risks of technology, and the impact which technology can have in shaping the society in which we live.
In particular, there were a number of references to the risks of mass unemployment as machines gradually take more and more jobs. Even without superintelligence, or indeed any lesser form of artificial intelligence, one can readily appreciate that, when a computer can do a particular task, or a task can be modified to make it computer-performable, it is probably cheaper to use a computer than a human to do that task: computers do not need rest periods or holidays, and they can be scaled both up and down easily. (See, for example, Andrew Keen’s ‘The Internet Is Not The Answer‘, which I reviewed for SCL here.)
But what happens to those whose jobs are replaced? How do they earn a living? Or are we just going to consign increasing numbers of humans to the ‘unemployable’ scrap-heap, replaced by their computer counterparts?
One approach to thinking about this issue, as advisors to those developing new technologies, is perhaps to ‘be more Luddite’.
‘Being more Luddite’
Luddism is a misunderstood concept, with the term ‘Luddite’ generally being used to refer to someone who chooses not to adopt new technologies. But this usage misstates, or misunderstands, what Luddism was about and, in suggesting that we ‘be more Luddite’, I am not suggesting that we should become anti-technology. The Luddites were not concerned about technology as such. Rather, they were concerned about machines ‘hurtful to Commonality’. They were looking for recognition that innovation should be about more than simply profit maximisation, and that greater weight should be given to the impact which technology had on the lives of the average person (see Kirkpatrick Sale’s ‘Rebels Against The Future‘).
The revolution of the machines in the late 18th and early 19th centuries (the spinning jenny, the power loom and the like) was fundamentally a labour revolution: a change from individual workers and cottage industry to factories. This was coupled with enclosure, the privatisation and fencing-off of resources which had previously been used by the public (see David Bollier’s ‘Silent Theft‘). It was a period of huge change, bringing about a substantial shift in lifestyle from self-sufficiency – growing one’s own crops and keeping one’s own animals – to a system of consumerism and commerce.
It would not necessarily be right to treat technology as central to this, but it certainly played a part, and the industrial revolution, coupled with enclosure, shaped the world we live in today. Indeed, we are very much used to dealing with enclosed public goods, in the form of copyright and patents: works which would otherwise be in the public domain are enclosed and encumbered by a system of control and restriction, giving exclusive rights for a period of time. And, on top of this, we build digital barbed wire (as James Boyle put it in his book ‘The Public Domain‘) in the form of DRM.
The Luddites were not asking for the end of technology. But they were asking for technology to be developed with humans in mind: innovation which was beneficial to commonality, not hurtful to it. Machines which made everyone’s lives better, not just the lives of a few. Machines which made jobs easier, not fewer.
As lawyers, we are used to thinking about the impact of ideas and technologies. We are used, for example, to performing privacy impact assessments, in an attempt to design technologies in ways which impinge as little as possible on the fundamental right to privacy, and to expecting government to undertake regulatory impact assessments before legislating, to ensure that laws are necessary and proportionate.
The ‘human impact assessment’
Perhaps we need to add another impact assessment to that mix: the human impact assessment. As we consider innovation, perhaps we need to take a step back and consider its impact on society. I am not for a moment going to pretend that this is easy: predicting whether any given technology will be successful is a gamble, let alone predicting the way in which it might succeed – you’ll remember that Twitter, for example, started life as an SMS-based status service, and WhatsApp as a status-notification app for mobile phones. As anyone who thinks about regulation will appreciate, ex ante regulation is a particularly complex area, and attempting to shape markets and technologies in a forward-looking manner is full of difficulties and risks, all the more so when one is talking about the regulation of nascent technologies or newly-established markets. Similarly, what does ‘good’ technology look like? I doubt that there is consensus on what is beneficial and what is not. But difficulty does not mean that we should decline to think about the problem.
What are we considering in a human impact assessment? Fundamentally, a human impact assessment would aim to bring to bear the question which the Luddites asked: is the innovation before us hurtful to commonality, or beneficial to it? It requires us to take a step back from the minutiae of a given technology, away from specific legal problems, and look at the bigger picture. If we are racing towards a society in which more and more jobs will be done by computers, who is stopping to think about the harm to those whose jobs are replaced? Do we want to bring back the poorhouse, or mass unemployment? Do we want to vest so much power in the hands of those who control the code? (See Lawrence Lessig’s ‘Code v2.0‘.) Could the technology be used to benefit everyone, rather than just a few?
I am not suggesting being anti-technology. But I am suggesting that it is incumbent on us all to do what we can to ensure that our technology is not anti-human. To keep humans at the heart of technology, we may need to go out of our way to put them there.
Please take this as something to ponder, debate and discuss, rather than as a firm proposed solution to a problem – or even a guarantee that there is a problem which needs addressing!
Neil Brown is an experienced telecoms and technology lawyer at a global communications company and is writing his PhD on the regulation of over-the-top communications services.