Should we still be talking about ‘privacy’ in a world invaded by bastard data? We all knew what privacy was when it came to our data. We had our names and addresses, we had our store cards, we had our medical records, we had our insurance, we had our travel tickets, and the list goes on. Certain companies and government agencies held that data to carry out specific tasks, and those data needed to be protected against misuse. We knew what our data were. We knew what we had to hide or protect. We knew the dangers and the benefits.
In a highly interconnected world of ‘big data’, the situation has changed fundamentally. Data are merged and compared. New data are generated and these, in turn, can be compared with new data sets, with further new data being collected. Data have become fertile and have bastard offspring that create new challenges, going far beyond what society previously considered (and, unfortunately, still considers) to be ‘privacy’.
Big bastard data has fundamentally changed the nature of personal data. It has changed for individuals, and it has changed for businesses and governments. It has also seismically shifted the unavoidable imbalance between the data we know of and ‘control’ about ourselves and the data that governments and businesses control about us. This has consequences for the balance of sensitive data that we hold about ourselves compared with that held by companies and governments, for the balance between what each individual can know about their own future health and well-being and what companies and governments can know, and even for the balance of how power is controlled in a ‘democracy’.
For example, imagine you are ready to start a new regime of exercise and healthy living and want to use the newest fitness tracker. Maybe data analysis that you don’t have access to will indicate that you are likely to become a better insurance risk and your premiums should be lower. Or maybe it will indicate that you know you are unfit and that, as with most good intentions, your efforts to get fit will probably fail, meaning you have identified yourself as somebody whose premiums should be higher. From a business perspective, the guesses don’t need to be accurate or fair; they just need to be accurate enough to be profitable.
One man posted a query on Reddit asking why his wife’s fitness tracker was producing odd readings. Other redditors ‘with more data’ about such apparent anomalies were able to tell her something she didn’t know: she was pregnant. An article in PC World lamented the ability of owners of Jawbone and Withings apps to hide or manipulate their own data as ‘bad news for health insurers, some of which have begun to use fitness tracker data to offer lower premiums, and courts, which have admitted the data as evidence in a number of cases’. Of course, if some people have lower premiums, others have higher premiums.
Can it really be simply a ‘privacy’ issue when an individual would not even be able to guess that the data exist? Can an individual be in control of data whose nature and existence are unknown and, often, unknowable? How much more should companies be able to know or guess about you than you know about yourself?
The invasion of the bastard data can also hit us at a higher level in our societies. Facebook’s infamous ‘mood experiment’ has shown that the company is prepared to impute unexpected meanings to its terms of service in order to carry out ‘research’ on its users, testing its ability to use its (trade-secret) algorithms to alter the moods of tens of thousands (and potentially millions) of them. It has also successfully tested its ability to change turnout in an election. For the moment (as far as we know) this has been done in an apparently neutral way, but there is no law that would stop the company from being paid (or choosing, for its own business reasons) to promote Socialists or Conservatives or Liberals or Donald Trump. Analysis suggests that Google could exercise a similar power. And all of this is before the orgy of web browsing data, search history, facial recognition, GPS data, mobile location data, social media interactions, vehicle number plate recognition, online purchases…
What are our political leaders doing to deal with this? The UK Conservative Party spent £100,000 per month on Facebook ahead of the last election. Ahead of the review of the European Union’s ePrivacy Directive, the European Commission Director General responsible for that instrument said that ‘Europe needs to build the data factories of the future’. Do we need ‘data factories’ or do we need autonomy, control, trust and defence of democracy in light of these new threats? Do we have the tools to defend ourselves from the rise of the bastard data? Do we even have the vocabulary to talk about it?
Ask Banksy.
Joe McNamee is the Executive Director of EDRi (https://edri.org/). This article first appeared as an editorial on the EDRi website.