‘Nudging’, for example, has been used in financial services. The Financial Conduct Authority (FCA) uses the term in a positive sense to encourage pension providers to nudge consumers to consult guidance before accessing their pension savings. However, a nudge may take various other forms.
A product might, for example, be designed to provide the customer with additional information when a specific life event occurs. Another might change its default settings in response to a customer’s spending or savings habits, while others might use personalised incentives to encourage purchases. Any of these nudges may lead to a material change in the customer’s behaviour.
While nudges can be used to encourage financial consumers to act in their own interests, a greater understanding and closer risk assessment may be necessary to distinguish acceptable nudging techniques from those with the potential to cause psychological harm, for example where financial losses occur as a result.
Identifying vulnerable customers
The focus of the EU proposals on addressing exploitation of vulnerabilities is consistent with separate developments taking place in various jurisdictions which encourage financial services providers to take steps to protect customers experiencing financial difficulty. In the UK, for example, the FCA has issued guidance on the fair treatment of vulnerable customers which considers the impact of automated solutions and technology.
As financial businesses take steps to protect vulnerable customers, they should be aware of the direction of travel the EU has set in respect of the use of AI technology, and consider reviewing their processes to assess whether they effectively identify practices that may present an unacceptable level of risk.
When purchasing an AI solution, businesses should use these processes to build in safeguards and protections that ensure vulnerable customers are treated fairly. Identifying the right questions should form an early part of this process. Those questions may include:
- how does the solution identify vulnerable customers?
- how does the solution learn from vulnerable customer data?
- what controls are in place to ensure children are not accessing the solution?
- can the solution be embedded into existing systems that also function to assist vulnerable customers – e.g. a system that prioritises vulnerable customer queries?
- is information presented to the customer in an easily accessible format, e.g. in respect of font and text size?
- can the solution link to other resources that may be useful to vulnerable customers?
Initial due diligence can then shape the level of warranties required in licensing agreements, the level of support required for vulnerable customers, and whether specific service levels are needed. To monitor the service effectively, meaningful reporting parameters may need to be agreed from the outset. These could include the number of vulnerable customers identified, the number of complaints raised, and whether the AI tool is able to learn from previous errors.
As the EU continues to develop its approach to regulating AI, there is good reason for regulated businesses to begin embedding processes that differentiate uses of AI posing real risks to consumers from those that do not.
Co-written by Hussein Valimahomed of Pinsent Masons.