Out-Law Analysis | 10 Sep 2021
EU plans to ban certain types of artificial intelligence (AI) tools could have unintended consequences for the operation of digital products in financial services, and businesses in the sector should consider lobbying to avoid them.
The European Commission’s proposals may be directly relevant both to how banks, insurers and fintechs influence customer decision-making and to their treatment of vulnerable customers.
The risk-based regulation of AI proposed by the European Commission in April this year is relevant to businesses across sectors, including those operating in financial services. The published draft AI Act includes plans to ban some AI systems from sale or use in the EU entirely. The bans target AI practices deemed to present an “unacceptable risk” to people.
The draft proposal sets out specific bans on the use of certain types of AI, where they may:

- deploy subliminal techniques beyond a person’s consciousness in order to materially distort their behaviour in a way that causes, or is likely to cause, them or another person physical or psychological harm; or
- exploit the vulnerabilities of a specific group of people due to their age or physical or mental disability, in order to materially distort the behaviour of a person within that group in a way that causes, or is likely to cause, harm.
The draft regulation does not contain further detail to help businesses understand what practices will be considered unacceptable. There is no explanation, for example, of what may be considered a “subliminal technique” beyond a customer’s consciousness. Nor are any parameters given for determining when such a technique could be viewed as the cause of a material change in a customer’s behaviour, or of psychological harm.
The lack of clarity around the use of these terms will raise significant concerns for product development teams and marketing departments. A review of current practices may even identify techniques which could broadly fall within the idea of a subliminal technique that materially changes a person’s behaviour.
‘Nudging’, for example, has been used in financial services. The Financial Conduct Authority (FCA) uses the term in a positive sense to encourage pension providers to nudge consumers to consult guidance before accessing their pension savings. However, a nudge may take various other forms.
A product might, for example, be designed to provide the customer with additional information when a specific life event occurs. Another might change its default settings in response to a customer’s spending or savings habits, while others might use personalised incentives to encourage purchases. All of these nudges may lead to a material change in the customer’s behaviour.
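To make the distinction concrete, the sketch below shows how simply such nudges might be encoded in a product. It is a minimal, hypothetical illustration only: the customer attributes, thresholds and messages are assumptions made for this article, not any provider’s actual logic or API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical rule-based nudge selection. All field names, thresholds
# and messages are illustrative assumptions, not a real product's rules.

@dataclass
class Customer:
    monthly_spend: float
    savings_balance: float
    life_event: Optional[str] = None  # e.g. "retirement", "new_job"

def select_nudge(customer: Customer) -> Optional[str]:
    """Return a nudge message for the customer, or None if no rule fires."""
    if customer.life_event == "retirement":
        # Informational nudge tied to a life event, akin to the FCA's
        # guidance nudge for pension savers.
        return "Consider consulting pensions guidance before accessing your savings."
    if customer.savings_balance > 0 and customer.monthly_spend > 0.9 * customer.savings_balance:
        # A behaviour-changing nudge: the kind of technique a risk
        # assessment would need to weigh against the draft AI Act's bans.
        return "Your spending is close to your savings balance. Review your budget?"
    return None
```

Even a rule set this simple aims to change customer behaviour, which is why each rule, and any personalisation layered on top of it, merits individual risk assessment.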
While nudges can be used to encourage financial consumers to act in their own interests, a greater understanding and closer risk assessment may be necessary to differentiate between nudging techniques that are acceptable and those that have the potential to cause psychological harm, for example if financial losses were to occur.
The focus of the EU proposals on addressing exploitation of vulnerabilities is consistent with separate developments taking place in various jurisdictions which encourage financial services providers to take steps to protect customers experiencing financial difficulty. In the UK, for example, the FCA has issued guidance on the fair treatment of vulnerable customers which considers the impact of automated solutions and technology.
As financial businesses take steps to protect vulnerable customers, they should be aware of the direction of travel the EU has set in respect of the use of AI technology, and consider reviewing their processes to assess whether those processes effectively identify practices which may present an unacceptable level of risk.
If a business is purchasing an AI solution, these processes should work towards building in safeguards and protections to ensure vulnerable customers are treated fairly. Identifying the right questions should form an early part of this process. Those questions may include:

- how does the AI tool identify customers who may be vulnerable, and what data does it rely on to do so?
- what safeguards does the provider build in to ensure vulnerable customers are treated fairly?
- how are errors identified and corrected, and is the tool able to learn from previous errors?
- what reporting will the provider make available so the service can be monitored on an ongoing basis?
Initial due diligence can then shape the level of warranties required in licensing agreements, the level of support required for vulnerable customers, and whether or not specific service levels are needed. To monitor the service effectively, meaningful reporting parameters may need to be agreed from the outset. These could include the number of vulnerable customers identified, the number of complaints raised, and whether or not the AI tool is able to learn from previous errors.
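As a hypothetical illustration of how those reporting parameters might be captured, the sketch below models a periodic monitoring report as a simple data structure with an example service-level check. The field names and threshold are assumptions, not terms of any actual agreement.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical periodic monitoring report reflecting the reporting
# parameters discussed above. Field names are illustrative assumptions.

@dataclass
class AIMonitoringReport:
    period_end: date
    vulnerable_customers_identified: int
    complaints_raised: int
    errors_detected: int
    errors_corrected: int  # errors the tool has since learned from

    def meets_service_level(self, max_open_errors: int = 0) -> bool:
        """Example check against an agreed service level: no more than
        max_open_errors uncorrected errors at the end of the period."""
        return (self.errors_detected - self.errors_corrected) <= max_open_errors

# Example: a period with one uncorrected error fails a zero-tolerance
# service level.
report = AIMonitoringReport(date(2021, 6, 30), 42, 3, 5, 4)
assert report.meets_service_level(max_open_errors=0) is False
```

Agreeing a structure like this at the outset makes it easier to draft warranties and service levels that can actually be measured.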
As the EU continues to develop its approach to regulating AI, there is good reason for regulated businesses to begin embedding processes which differentiate uses of AI that pose real risks to consumers from those that do not.
Co-written by Hussein Valimahomed of Pinsent Masons.