Insurers advised to build trust over use of AI

Out-Law News | 26 Sep 2019 | 12:07 pm | 3 min. read

Insurers should be open about how they use algorithms and data to help build trust over the use of artificial intelligence (AI) tools in the sector, the UK's Centre for Data Ethics and Innovation (CDEI) has said.

In a new paper, the CDEI said insurers face a "challenge ... to find common ground on what constitutes an ethical use of AI". It acknowledged there is divided opinion on whether AI in insurance is something to "laud or lament".

While AI in insurance has the potential to reduce prices for policyholders, address fraud, incentivise the take-up of insurance, open up insurance to new groups, and reduce damage to people and property where "deployed responsibly and in competitive markets", there are "obvious harms" that need to be addressed, the CDEI said.

The CDEI, an official, independent adviser to the UK government on the measures needed to maximise the benefits of data and AI for the UK's society and economy, said industry should "engage with the public to reach a consensus on what constitutes a responsible use of AI and data". This, it said, would help the sector understand consumer attitudes towards the processing of data from social media for insurance purposes and how they feel about the use of algorithms to predict people's willingness to pay higher premiums, for example.

"More work needs to be done to understand what the public views as an acceptable use of AI within the industry, including the types of data that insurers should be able to make use of," the CDEI said. "However, that should not stop insurers from taking steps today to address obvious harms. Among other measures, insurers could commit to regularly undertaking discrimination audits on their datasets and algorithms; making privacy notices more accessible so that customers know how their data is being used; and establishing clear lines of accountability within their organisations so that it is apparent who is responsible for overseeing the responsible use of algorithms."

A "sector-wide commitment to transparency" was also advocated by the CDEI.

"Without greater disclosure, insurers will struggle to build trust with customers and regulators will lack the information to design proportionate regulatory responses," the CDEI said.

"Oversight will remain patchy until the industry is more transparent about how it uses data-driven technology in day-to-day operations," the CDEI said. "Critically, greater transparency would help to distinguish genuine threats from those that are overstated, and would support the development of interventions that are proportionate to the risk in question, thereby allowing responsible innovation to flourish."

The CDEI's paper highlighted existing moves by insurers to use AI in areas such as customer onboarding, personalised pricing and in claims management, but it said future uses could "help customers to live healthier and safer lives". It could entail AI "recommending safer driving routes or by flagging early signs of damage in the home", for instance, it said.

The CDEI said insurers could end up "collecting more information about their customers than is necessary to deliver their core services" as data from wearables and telematic devices become more available to them. It said industry bodies could play a role in addressing the "ethical concerns" this raises.

"The industry could draw up data storage standards, possibly developed by the Association of British Insurers (ABI) or British Standards Institute, that discourage insurers from storing data that is not central to their mission," the CDEI said. "Such standards could include an expectation for insurers to review their datasets on a regular basis to determine whether they are material to their core business practice, and if not, to eliminate them from company records."

While the CDEI avoided making formal recommendations, it suggested insurers might wish to consider establishing their own ethical panels to focus on the issues raised by their specific plans to use AI rather than "generic concerns applicable to the entire industry".

It also said insurers could help customers port their non-commercially sensitive data between different insurance providers, and that they might also look to make individual board members responsible "for overseeing uses of AI and other forms of data-driven technology".

For its part, the government could consider legislating to clarify what data insurers can use when conducting risk assessments on prospective policyholders, the CDEI said.

Luke Scanlon, an expert in financial services and technology at Pinsent Masons, the law firm behind Out-Law, said: "Whilst the CDEI report on AI and personal insurance gives just a snapshot of these issues, it does provide useful practical guidance on how insurers can address rising concerns around the use of AI in the sector."

"In particular, insurers should take note of the recommendation to carry out data discrimination audits and ensure they have proper oversight of any third party data suppliers, including the carrying out of due diligence checks on the data collected. All of this will help insurers have the correct processes in place that they will need going forward, particularly should the use of AI in the insurance sector become a focus of the FCA, as we expect it will," he said.