Out-Law Analysis

Prohibited AI practices should spur financial services lobbying


EU plans to ban certain types of artificial intelligence (AI) tools could have unintended consequences for the operation of digital products in financial services. Businesses in the sector should consider lobbying to avoid those outcomes.

The European Commission’s proposals may be directly relevant to how banks, insurers and fintechs influence customer decision-making, as well as to their treatment of vulnerable customers.

The draft AI Act and prohibited practices

The risk-based regulation of AI proposed by the European Commission in April this year is relevant to businesses across sectors, including those operating in financial services. The published draft AI Act includes plans to ban the sale or use of some AI systems in the EU entirely. The bans concern AI practices deemed to present an “unacceptable risk” to people.

The draft proposal sets out specific bans on the use of certain types of AI, where they may:

  • have a significant potential to manipulate persons through subliminal techniques beyond their consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm, or
  • exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm. 

The draft regulation does not contain further detail to help businesses understand which practices will be considered unacceptable. There is no explanation, for example, of what may constitute a “subliminal technique” beyond a customer’s consciousness. Nor are parameters given for determining when such a technique could be viewed as the cause of a material change in a customer’s behaviour, or of psychological harm.

The lack of clarity around these terms will raise significant concerns for product development teams and marketing departments. A review of current practices may even identify techniques which could broadly fall within the notion of a subliminal technique that materially changes a person’s behaviour.


‘Nudging’, for example, has been used in financial services. The Financial Conduct Authority (FCA) uses the term in a positive sense to encourage pension providers to nudge consumers to consult guidance before accessing their pension savings. However, a nudge may take various other forms.

One product might be designed to provide the customer with additional information when a specific life event occurs. Another might change its default settings in response to a customer’s spending or savings habits, and others might use personalised incentives to encourage purchases. All of these nudges may lead to a material change in the customer’s behaviour.

While nudges can be used to encourage financial consumers to act in their own interests, a greater understanding and closer risk assessment may be necessary to differentiate between nudging techniques that are acceptable and those that may have the potential to cause psychological harm, for example if financial losses were to occur.
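
By way of illustration only, the Python sketch below shows one way such a differentiation might be encoded in a product’s nudge pipeline, on the assumption that each proposed nudge carries an assessed likelihood of material behaviour change and a flag for potential financial loss. All of the names and the threshold figure are hypothetical; neither the draft AI Act nor the FCA prescribes any such mechanism.

from dataclasses import dataclass
from enum import Enum

class NudgeType(Enum):
    LIFE_EVENT_INFO = "life_event_info"     # extra information on a life event
    DEFAULT_CHANGE = "default_change"       # altered default settings
    PERSONALISED_INCENTIVE = "incentive"    # personalised purchase incentive

@dataclass
class Nudge:
    nudge_type: NudgeType
    behaviour_change_risk: float    # assessed likelihood (0.0-1.0) of material behaviour change
    potential_financial_loss: bool  # could the nudged action plausibly cause the customer a loss?

HARM_REVIEW_THRESHOLD = 0.5  # hypothetical risk-appetite figure, not taken from the Act

def assess_nudge(nudge: Nudge) -> str:
    """Route a proposed nudge; this sketch only shows where a gate could sit."""
    if nudge.potential_financial_loss and nudge.behaviour_change_risk >= HARM_REVIEW_THRESHOLD:
        return "block_pending_human_review"
    if nudge.behaviour_change_risk >= HARM_REVIEW_THRESHOLD:
        return "log_and_monitor"
    return "allow"

# A personalised incentive that could lead to a loss-making purchase is held back:
print(assess_nudge(Nudge(NudgeType.PERSONALISED_INCENTIVE, 0.7, True)))  # block_pending_human_review

In practice, any such gate would need to rest on documented legal and conduct-risk criteria rather than a single numeric threshold.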

Identifying vulnerable customers

The focus of the EU proposals on addressing exploitation of vulnerabilities is consistent with separate developments taking place in various jurisdictions which encourage financial services providers to take steps to protect customers experiencing financial difficulty. In the UK, for example, the FCA has issued guidance on the fair treatment of vulnerable customers which considers the impact of automated solutions and technology.

As financial businesses take steps to protect vulnerable customers, they should be aware of the direction of travel the EU has set in respect of the use of AI technology. They should also consider reviewing their processes to assess whether those processes effectively identify practices which may present an unacceptable level of risk.

When purchasing an AI solution, businesses should use these processes to build in safeguards and protections that ensure vulnerable customers are treated fairly. Identifying the right questions should form an early part of this process; a sketch of how the answers might be tracked follows the list below. Those questions may include:

  • how does the solution identify vulnerable customers?
  • how does the solution learn from vulnerable customer data?
  • what controls are in place to ensure children are not accessing the solution?
  • can the solution be embedded into existing systems that also function to assist vulnerable customers, for example a system to prioritise vulnerable customer queries?
  • is information presented to the customer in an easily accessible format, for example in respect of font and text size?
  • can the solution link to other resources that may be useful to vulnerable customers?
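
Purely as a sketch of how a procurement team might operationalise this stage, the Python below records a vendor’s answers against each of the questions above in a structured checklist, so that unanswered points are visible before contract terms are negotiated. The question keys, type names and example vendor are all hypothetical.

from dataclasses import dataclass, field

# Hypothetical checklist mirroring the questions above; the keys are illustrative.
DUE_DILIGENCE_QUESTIONS = [
    "identifies_vulnerable_customers",
    "learns_from_vulnerable_customer_data",
    "controls_against_child_access",
    "embeds_into_existing_support_systems",
    "accessible_information_format",
    "links_to_support_resources",
]

@dataclass
class DueDiligenceRecord:
    vendor: str
    answers: dict = field(default_factory=dict)  # question key -> vendor's answer

    def outstanding(self) -> list:
        """Questions the vendor has not yet answered."""
        return [q for q in DUE_DILIGENCE_QUESTIONS if q not in self.answers]

record = DueDiligenceRecord(vendor="ExampleAIVendor")  # hypothetical vendor
record.answers["identifies_vulnerable_customers"] = (
    "Flags accounts on customer self-declaration and adviser referral only"
)
print(record.outstanding())  # the five questions still to be put to the vendor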

Initial due diligence can then shape the level of warranties required in licensing agreements, the level of support required for vulnerable customers, and whether or not specific service levels are needed. To effectively monitor the service, meaningful reporting parameters may need to be agreed from the outset. These could include the number of vulnerable customers identified, the number of complaints raised, and whether or not the AI tool is able to learn from previous errors.
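
As an assumption-laden illustration of what agreed reporting parameters could look like, the Python sketch below gathers the metrics just mentioned into a periodic report and applies a crude test of whether the tool is repeating previously identified errors. The field names, reporting period and figures are invented for the example.

from dataclasses import dataclass

@dataclass
class AIServiceReport:
    """Hypothetical periodic report agreed with an AI vendor at the outset."""
    period: str
    vulnerable_customers_identified: int
    complaints_raised: int
    repeated_errors: int  # errors of a type already seen in an earlier period

    def learning_from_errors(self) -> bool:
        # Crude illustrative test: a tool that "learns" should not keep
        # repeating previously identified errors.
        return self.repeated_errors == 0

report = AIServiceReport(
    period="Q4",
    vulnerable_customers_identified=132,
    complaints_raised=4,
    repeated_errors=1,
)
print(report.learning_from_errors())  # False -> a point to escalate under the agreement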

As the EU continues to develop its approach towards regulating AI, there is good reason for regulated businesses to begin embedding processes which differentiate uses of AI that pose real risks to consumers from those that do not.

Co-written by Hussein Valimahomed of Pinsent Masons.
