Insurance premiums may soon be decided by artificial intelligence and personal profiles created from social media data — if they aren’t already — according to researchers concerned about the industry’s potential misuse and lack of transparency regarding the new technologies.
These technologies, they say, could lead to people being disadvantaged for reasons that have nothing to do with their riskiness, such as the type of phone they use, or to indirect discrimination against people on the basis of protected characteristics such as religion.
Insurance companies overseas are already using data from sources including social media to decide how much to charge customers and whether to underwrite policies. Whether Australian companies have adopted similar methods is not known.
That’s a problem, according to academics Dr Zofia Bednarz and Dr Kayleen Manwaring, who have been researching the impacts of the insurance industry collecting data from non-traditional places that Australians aren’t aware they’re sharing.
Dr Bednarz told Crikey it’s safe to assume that Australian insurance companies are watching their international counterparts and investigating or even already using new sources of data and analysis tools.
“The insurance industry has always been heavily interested in data. But now we’re living through a time in which there’s an increase in the sheer amount of data and the means to analyse it,” she said. The Insurance Council of Australia did not immediately respond to a request for comment.
Dr Bednarz pointed to customer loyalty programs as an example of how insurance companies seek new forms of data about customers. Loyalty schemes such as those run by Coles and Qantas collect information including social media accounts, locations, purchases, flight details, use of inflight entertainment systems and browsing history, which can be used to create profiles of an individual’s health, personality and behaviour.
If that alarms you, consider how advances in technology might provide similar insights without an individual consenting to be part of a rewards program. Artificial intelligence and other tools that slurp up huge amounts of data, such as social media posts, are the next big thing for insurance companies, says Dr Bednarz.
She says the issue with these new technologies is that they’re opaque, sometimes even to the people running them and the companies using them, and they may be hurting people for unfair reasons. Dr Bednarz says that an individual might be charged more because they don’t have a large digital footprint for companies to base decisions on.
Artificial intelligence can be trained on data sets to recognise patterns that humans might not pick up. In the insurance industry, an insurer might feed a model its customers’ claims histories to predict who is likely to make more claims in the future. The issue with this technique is that an AI could base its decisions on spurious or even erroneous connections, and it’s very difficult to unpick how those decisions were made, even for people within a company, let alone a customer.
Plus, there’s a possibility that a connection between two factors may act as a proxy for a protected characteristic. Take an AI that finds people living in a certain area are more likely to have car accidents. If a religious community is concentrated in that area, location becomes a stand-in for religion, a characteristic protected by anti-discrimination law. In making this decision, an insurance company could be illegally discriminating without being aware of it.
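The proxy effect described above can be shown with a minimal, entirely invented sketch in Python (not any insurer’s actual model): the pricing rule below only ever sees postcodes, yet because one group is concentrated in one postcode, the group ends up with higher predicted claim rates anyway.

```python
from collections import defaultdict

# Hypothetical synthetic records: (postcode, protected_group, made_claim).
# Group "A" is concentrated in postcode 2000, which happens to have
# a higher claim rate in this toy data.
records = [
    ("2000", "A", 1), ("2000", "A", 1), ("2000", "A", 0), ("2000", "B", 1),
    ("3000", "B", 0), ("3000", "B", 0), ("3000", "B", 1), ("3000", "A", 0),
]

# "Model": predicted claim rate per postcode. Note it never sees the
# protected group at all.
totals, claims = defaultdict(int), defaultdict(int)
for postcode, _, claimed in records:
    totals[postcode] += 1
    claims[postcode] += claimed
predicted_rate = {pc: claims[pc] / totals[pc] for pc in totals}

def avg_rate(group):
    """Average predicted rate applied to members of a group."""
    rates = [predicted_rate[pc] for pc, g, _ in records if g == group]
    return sum(rates) / len(rates)

print(predicted_rate)                 # {'2000': 0.75, '3000': 0.25}
print(avg_rate("A"), avg_rate("B"))   # 0.625 0.375
```

Despite the model being nominally blind to group membership, group A is quoted a higher expected claim rate on average because postcode correlates with group. This is the kind of indirect discrimination that is hard to detect from outside, or even inside, a company.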
“Those models are extremely complicated. It’s impossible for people who design the algorithm to know what’s going into the decisions. In some cases, insurers could be using third-party models, so they don’t even know what’s happening,” Dr Bednarz said.
According to Dr Bednarz, the solutions to these potential issues involve restricting insurance companies from using external data and mandating transparency around the use of machine learning. While there are often problems regarding people and companies using information illegally (like facial-recognition company Clearview AI illegally scraping people’s faces off social media), ensuring that companies have to show their working for decisions would make a difference.
Regulating these companies might be welcomed by the industry. In the past, insurers have called for standards because, Dr Bednarz says, investing in these technologies is costly and the investment is wasted if the technology is later banned for want of clear guidelines. The federal government’s Consumer Data Right program is one example of such a framework.
“It’s a good idea, but you can’t feed more data into these companies in an insurance context without any kinds of restrictions,” she said.
This article was first published by Crikey.