Mustafa loves good coffee. In his free time, he often browses high-end coffee machines that he cannot currently afford but is saving for. One day, travelling to a friend’s wedding abroad, he gets to sit next to another friend on the plane. When Mustafa complains about how much he paid for his ticket, it turns out that his friend paid less than half of what he paid, even though they booked around the same time.
He looks into possible reasons for this and concludes that it must be related to his browsing of expensive coffee machines and equipment. He is very angry about this and complains to the airline, who send him a lukewarm apology that refers to personalised pricing models. Mustafa feels that this is unfair but does not challenge it. Pursuing it any further would cost him time and money.
This story — which is hypothetical, but can and does occur — demonstrates the potential for people to be harmed by data use in the current ‘big data era’. Big data analytics involves using large amounts of data from many sources which are linked and analysed to find patterns that help to predict human behaviour. Such analysis, even when perfectly legal, can harm people.
Mustafa, for example, has likely been affected by personalised pricing practices, whereby his searches for high-end coffee machines were used to make assumptions about his willingness to pay or his buying power. This, in turn, may have led to his higher-priced airfare. While Mustafa suffered no serious harm in this case, instances of serious emotional and financial harm are, unfortunately, not rare: individuals have been denied mortgages, or have had their general credit-worthiness put at risk, based on associations with other individuals. This can happen if a person shares characteristics with other people who have poor repayment histories.
Instances of emotional harm can also occur. Imagine a couple who find out they are expecting a much-wanted child, but suffer a miscarriage at five months. Months later, the couple may find they continue to receive advertisements from shops specialising in infant products, marking what should have been key ‘milestones’ and causing real distress. This is another hypothetical but entirely possible scenario.
A Ben and Jerry’s advert said my name because I’m automatically signed up for personalised advertising. I hate technology
— iuris (@conjectures) August 13, 2018
The law — or lack of it
In many of these cases, because the harmful practice may not have broken any laws, those who were harmed by data use have limited or no legal options open to them. What happened to Mustafa, for example, was perfectly legal, as there are no current laws forbidding personalised pricing as such. Our current legal systems do not adequately protect people from the harms emerging from big data.
This is because it is very difficult to trace how our data is linked and used. Even if the airline had done something unlawful, such as breaching data protection law, it would be near impossible for Mustafa to find out. People who feel they have been harmed by data use may struggle to show how their data was used to cause the harm, which data was involved, or which data controller used it. Without that proof, they cannot obtain a legal remedy.
Furthermore, even if an individual can show that a particular use of their data caused them harm, that use of customer information, to adjust pricing for example, may not be unlawful.
Equally, the harm may be caused not by one’s own data but by the use of other people’s data (third-party data). In Mustafa’s case, for example, it might be that other individuals who were also interested in expensive coffee machines had very high incomes or bought expensive items. This may have been used to infer that Mustafa also fell into this category, which may have resulted in him being quoted higher prices for other products too. An individual harmed through the use of third-party data will often have no remedy under current data protection laws.
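The kind of inference described above can be sketched in a few lines of code. This is a deliberately crude toy example, not any airline’s actual system: the customer profiles, the incomes, and the markup formula are all invented for illustration. The point is only to show how a price can depend on data about *other* people who happen to share an interest.

```python
# Toy illustration (hypothetical) of personalised pricing driven by
# third-party data: a shopper is assigned the average income of OTHER
# customers who share at least one browsing interest with them.

BASE_FARE = 200.0

# Invented profiles of other customers: their interests and incomes.
other_customers = [
    {"interests": {"espresso machines", "watches"}, "income": 150_000},
    {"interests": {"espresso machines", "travel"}, "income": 120_000},
    {"interests": {"gardening"}, "income": 40_000},
]

def inferred_income(interests, others):
    """Average income of other customers sharing at least one interest."""
    similar = [c["income"] for c in others if c["interests"] & interests]
    return sum(similar) / len(similar) if similar else None

def personalised_fare(interests, others, base=BASE_FARE):
    income = inferred_income(interests, others)
    if income is None:
        return base  # no lookalikes found: quote the base fare
    # Crude, invented markup: richer-looking customers see a higher fare.
    return base * (1 + min(income / 200_000, 1))

# Mustafa never disclosed his income; it is guessed from lookalikes.
mustafa_interests = {"espresso machines"}
print(round(personalised_fare(mustafa_interests, other_customers), 2))
```

Note that Mustafa is quoted a higher fare purely because two strangers with high incomes browsed similar products; nothing in his own data states his income, which is exactly why current data protection remedies are hard to apply.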
A new system
To help address such issues, we argue that we need to accept that some risks of data use cannot be prevented. Instead of focusing solely on minimising or avoiding those risks, we also need to find better ways to support people who suffer harm, for example by actively monitoring and responding to harms, including those caused by entirely legal uses of data.
We think that, as part of this system, a new type of institution should be set up. We call them harm mitigation bodies. These would operate at a national level, and people who felt they had been harmed by data use but did not qualify for legal remedies could report that harm to them. Unlike traditional remedies, harm mitigation bodies could provide support even where no laws were broken. They would be easy to use and flexible, so that they could support people where and how they most need it, giving individuals more power and strengthening collective responsibility for data use.
These proposed bodies would collect information on what types of harm occur. Currently, no national or international body collects information on data harms systematically. They would also feed back information to policymakers and data users to help improve how things are done. And where people suffer financial harm but cannot access legal help, they might provide financial support as well.
Big data analytics is rightly lauded for the many new opportunities it offers. But it is inevitable that some people will be harmed. As a society, we need to face this truth and provide better assistance to those who suffer harm, so that nobody who bears the costs of these new practices is left to do so alone.