Out-Law News | 07 Jul 2021 | 11:58 am | 4 min. read
The European Insurance and Occupational Pensions Authority (EIOPA) has outlined six principles to promote the ethical and trustworthy use of artificial intelligence (AI) in the insurance sector.
The six principles (92 page / 1.19MB PDF) build on existing insurance principles and requirements, the principles of the General Data Protection Regulation (GDPR), the ethics guidelines for trustworthy AI produced by the European Commission’s High-Level Expert Group on AI (AI HLEG), and the European Commission’s White Paper on AI.
They focus on proportionality; fairness and non-discrimination; transparency and explainability; human oversight; record keeping; and robustness and performance. EIOPA has also published non-binding guidance on how to implement the principles through an AI system’s entire lifecycle.
The EIOPA expert group on digital ethics which produced the report said insurance firms should conduct AI use case impact assessments to determine the proportionate governance measures required for a specific AI tool. The group has formulated an AI use case impact assessment which can be used by insurance firms. This takes into account the impact of AI applications on both consumers and insurance firms as the use of AI poses risks for both.
The impact of an AI system is determined by its potential for harm, assessed through a two-part investigation into the severity of the harm and the likelihood that it will occur. Three levels of likelihood and severity are considered: high, medium and low, with the option for insurance firms to define further levels. The group said it expects many of the recommendations in the report to apply only to use cases that have a higher impact on consumers and insurance firms.
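The two-part assessment described above can be illustrated in code. This is a hypothetical sketch only: the report defines the three likelihood and severity levels, but the combination rule used here (taking the more severe of the two ratings) is an illustrative assumption, not EIOPA’s prescribed method.

```python
# Hypothetical sketch of the two-dimensional impact rating described in the
# report: likelihood of harm x severity of harm, each rated high/medium/low.
LEVELS = ["low", "medium", "high"]

def impact_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity of harm into an overall impact level."""
    for level in (likelihood, severity):
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
    # Assumption for illustration: overall impact is the higher of the two.
    return LEVELS[max(LEVELS.index(likelihood), LEVELS.index(severity))]

# A firm could then gate governance measures on the outcome, applying the
# report's fuller recommendations only to higher-impact use cases.
print(impact_rating("medium", "high"))  # high
```

A firm adopting this kind of matrix could also add intermediate levels, as the report allows, by extending the `LEVELS` list.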
The guidance’s principle of proportionality says: “Insurance firms should then assess the combination of measures put in place in order to ensure an ethical and trustworthy use of AI”.
The proposed assessment follows existing principles and processes. The assessment of the impact on consumers follows the Article 29 Data Protection Working Party’s guidelines on data protection impact assessments (DPIAs), taking a risk-based approach and aiming to address risks similar to those arising from AI applications.
The impact on insurance firms is based on the risks that insurance firms regularly assess under their Own Risk and Solvency Assessment (ORSA), outlined in Articles 44 and 45 of the Solvency II Directive. The assessment also incorporates the AI HLEG’s recommendation to conduct a fundamental rights impact assessment (FRIA) based on a prediction of the impact, looking at the anti-discrimination and diversity considerations that are most relevant in an insurance and AI context.
The use of AI systems should take into account their outcomes while balancing the interests of all stakeholders. When looking at fairness and non-discrimination, firms should consider financial inclusion issues and ways to avoid reinforcing existing inequalities, it said.
“This includes assessing and developing measures to mitigate the impact of rating factors such as credit scores and avoiding the use of certain types of price and claims optimisation practices like those aiming to maximise consumers’ ‘willingness to pay’ or ‘willingness to accept’,” the guidance said.
Firms should “make reasonable efforts” to reduce bias in data and AI systems, such as by using explainable AI systems or fairness and non-discrimination metrics in high-impact AI applications. Records of the measures put in place to address fairness and non-discrimination should also be maintained. The GDPR principles of purpose limitation and data minimisation – data should not be used for purposes other than the purpose for which it was collected – should be balanced against the interests of the consumer.
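The report refers to fairness and non-discrimination metrics without prescribing particular ones. As a purely illustrative example, one widely used metric is the demographic parity difference: the gap in positive-outcome rates between two groups. The function and data below are assumptions for illustration, not taken from the EIOPA report.

```python
# Illustrative example of one common fairness metric: the demographic parity
# difference, i.e. the gap in positive-outcome rates between two groups.
def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute difference in positive-outcome rates between groups A and B.

    Each input is a list of 0/1 outcomes (e.g. 1 = application approved).
    A value near 0 suggests similar treatment; larger values may warrant
    investigation under a fairness review.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical underwriting approvals for two consumer groups
group_a = [1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1]   # 50% approved
print(demographic_parity_difference(group_a, group_b))  # 0.25
```

In practice a firm would compute such metrics across the protected characteristics relevant to its market and record the results, in line with the record-keeping expectation above.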
The principle of transparency suggests insurance firms should adapt explanations to specific AI use cases and the end consumers, and explanations should be meaningful and easy to understand. This aligns with approaches taken in existing legislation and principles relating to AI.
There should be adequate levels of human oversight throughout the AI system’s lifecycle, with clear, documented roles for all staff involved. Meanwhile any data used in an AI system should be accurate and stored in a safe and secure environment, with appropriate data management records kept.
Finally, insurance firms should use robust systems with their performance monitored on a regular basis. AI systems should be deployed in “resilient and secured” IT infrastructure to help protect against cyberattacks.
EIOPA said it would use the findings of the expert group to help establish the boundaries for the appropriate use of AI in insurance, and to identify possible supervisory initiatives in this area.
The principles follow work by a number of authorities on AI principles and regulation. Earlier this year the European Commission published a draft AI Act which, if adopted, would introduce mandatory requirements for ‘high risk’ AI and a ban on the use of certain types of AI. Meanwhile the UK’s Financial Conduct Authority and Bank of England have convened the Artificial Intelligence Public Private Forum to discuss issues relating to AI and financial services, and most recently the FCA and the Alan Turing Institute published a report on transparency and AI in financial services (79 page / 942KB PDF).
The report also highlights that the financial services sector is facing a ‘pacing problem dilemma’, with regulatory and legal responses struggling to keep pace with the speed of technological developments.
Financial services expert Luke Scanlon of Pinsent Masons, the law firm behind Out-Law, said: "The speed of technological change will always present challenges to financial sector businesses looking to implement robust controls. The growing body of principle-based best practice however, which EIOPA's approach firmly aligns with, is helpful for insurers in dealing with the here and now and also having an eye on what may be required of them from a regulatory perspective in the future."