Out-Law Analysis

AI in financial services: addressing the risk of bias


Financial services firms need to consider what business outcomes will arise from using AI technology, how this will impact their ability to treat customers fairly, and whether they can maintain transparency and accountability in their decision making processes.

Addressing these considerations will help financial services businesses to effectively manage the well-recognised risks of unfair bias and discrimination, and to assess the level of contractual protection they should seek when engaging with AI suppliers.

There are due diligence steps firms can take, and contractual options open to them.

Where is AI being used in financial services?

AI technologies are currently used across both front and back office operations, including monitoring user behaviour, recruitment, insurance decision making, credit referencing, loan underwriting, and anti-money laundering and fraud detection. AI is also being embraced in the capital markets: according to the IMF, two thirds of cash equity trading is now associated with automated trading.

Given the vast scope and scale of data entering financial services businesses, it is important for firms to ensure that AI tools are not making biased or skewed decisions, in order to avoid legal claims, fines from regulators and deep reputational damage.

These are well-recognised risks. According to the European Banking Authority (EBA), "the use of AI in financial services will raise questions about whether it is socially beneficial [and] whether it creates or reinforces bias" and the Centre for Data Ethics and Innovation's AI barometer has reported that "bias in financial decisions was seen as the biggest risk arising from the use of data-driven technology".

Some firms have already faced scrutiny over AI-led decision making that has been perceived as biased against particular groups of people.

What is bias?

Definitions of bias differ and depend on the context in which they are used. The EBA has referred to bias as "an inclination of prejudice towards or against a person, object, or position." A European Commission technical definition, by contrast, describes bias as "an effect which deprives a statistical result of representativeness by systematically distorting it". A Cambridge English dictionary definition sets out that bias is "the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment". What these definitions have in common is the recognition that bias can arise in AI systems inadvertently and unconsciously, as a downstream effect of how they are built and trained.

How do bias and discrimination occur in an AI context?

AI systems are built on sets of algorithms that "learn" by reviewing large datasets to identify patterns on which they can base decisions. In essence, they are only as good as the data they are fed. There are a number of ways in which AI systems can develop bias, including:

  • incomplete data – where the AI has been trained on incomplete or imbalanced data that is not representative of the general population. This is sometimes labelled 'unrepresentative sampling': certain groups are either under- or over-represented in the dataset (a simple check for this is sketched after this list);
  • biased datasets – the data the AI is trained on reflects previously biased decision making, so the system will tend to favour or discriminate against applicants or users in line with those earlier decisions. The data may be accurate in and of itself, but it may only be an accurate representation of historic biased behaviour;
  • biased rules – the inadvertent coding of biased rules by programmers. This could occur where programmers do not consider what the business outcome or impact of the decisions may be and inadvertently code their own biases into the solution;
  • biased policies – business policies that underpin AI decision making may lead to prudent financial decisions but, where the business outcome has not been carefully considered, could inadvertently lead to bias.
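
The first of these failure modes, unrepresentative sampling, is the most straightforward to test for. The following minimal sketch (in Python, with hypothetical column names and reference shares) compares the share of each demographic group in a training dataset against its share of a reference population and flags material deviations:

```python
import pandas as pd

def representation_report(train: pd.DataFrame,
                          group_col: str,
                          population_shares: dict,
                          tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the training data against its share
    of a reference population, flagging under- or over-representation."""
    train_shares = train[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        share = float(train_shares.get(group, 0.0))
        rows.append({"group": group,
                     "training_share": round(share, 3),
                     "population_share": pop_share,
                     "flagged": abs(share - pop_share) > tolerance})
    return pd.DataFrame(rows)

# Hypothetical usage with census-derived shares for an 'age_band' attribute:
# report = representation_report(training_df, "age_band",
#                                {"18-34": 0.30, "35-54": 0.35, "55+": 0.35})
```

A flagged group does not by itself prove the resulting model will be biased, but it identifies where closer testing of model outcomes is warranted.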

All of these examples of bias can lead to discrimination in financial services. As one example of such potential discrimination, the EBA has highlighted the circumstances of a class of people who are less represented in a training dataset receiving less or more favourable outcomes as a result of what an AI system has learned.

What due diligence steps should a customer undertake?

UK financial regulators have not yet provided detailed guidance on the steps they expect regulated firms to take but, along with other industry bodies, they have given an indication of the steps they would expect firms to take when engaging with AI solutions.

Multi-discipline procurement teams

When procuring AI systems it would be useful to assemble a team of individuals covering multiple disciplines. The UK's Office for Artificial Intelligence, for example, recommends requiring suppliers to assemble teams that could include individuals with domain expertise, commercial expertise, systems and data engineering capabilities, model development skills (for example, in deep learning), data ethics expertise, and visualisation or information design skills.

Diversity, culture and training

A potential customer of an AI solution should also consider how diverse the supplier's programming team is and whether its members undertake relevant anti-bias and discrimination training. A diverse team draws on the perspectives of individuals of different genders, backgrounds and faiths, increasing the likelihood that decisions made on purchasing and operating AI solutions are inclusive and not biased.

Whether the supplier has an open and progressive culture that incentivises and encourages its developers to spot errors arising from the AI solution may also indicate that adequate processes to protect against bias are in place.

Governance

Regulators will also be keen to see that businesses have appropriate oversight functions and controls in place. Firms should ask suppliers what controls and monitoring tools they have to ensure that new data entering the data pool is of high quality, and how this can be reported on and reviewed during governance meetings. Some businesses have developed tools aimed at determining whether a potential AI solution is biased.
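
The detail of those tools varies, but many start from simple fairness metrics computed over logged decisions. As an illustrative example only (hypothetical field names, not any specific vendor's product), the sketch below applies the widely cited 'four-fifths' disparate impact test, under which each group's approval rate should be at least 80% of the most favoured group's:

```python
import pandas as pd

def disparate_impact_ratios(decisions: pd.DataFrame,
                            group_col: str,
                            outcome_col: str) -> pd.Series:
    """Each group's approval rate relative to the most favoured group.
    Ratios below 0.8 breach the common 'four-fifths' rule of thumb."""
    rates = decisions.groupby(group_col)[outcome_col].mean()  # outcome_col holds 0/1
    return rates / rates.max()

# Hypothetical usage against a log of loan decisions:
# ratios = disparate_impact_ratios(decisions_df, "gender", "approved")
# print(ratios[ratios < 0.8])  # groups receiving disproportionately few approvals
```

Reporting a metric like this at regular governance meetings gives oversight functions a concrete, trackable indicator rather than a general assurance.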

Transparency and documentation

From a compliance perspective, organisations should document their approach to tackling bias and discrimination from the outset and keep this under review at each of the main stages of an algorithm's development and use.

Different levels of insight into the decision making process underpinning the solution will be required to reflect different needs. For example, the information reviewed by the board of directors would focus on helping them determine whether appropriate business outcomes are being achieved, whilst a technical analyst will need more detailed technical information to determine whether the coding and datasets are producing fair and accurate results. Guidance produced in the UK has stressed the importance of the explainability of AI-driven decision making.
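
One common way to provide the technical analyst's more detailed view is to report which inputs actually drive a model's decisions. The following is a minimal sketch using scikit-learn's permutation importance, run on synthetic stand-in data with hypothetical feature names:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-decision dataset (hypothetical feature names)
feature_names = ["income", "loan_amount", "age", "postcode_score", "tenure"]
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# How much does randomly shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

A report of this kind can flag, for instance, that a proxy for a protected characteristic is carrying significant weight in decisions, which is exactly the insight a board-level summary would omit.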

Accurate, robust and contemporaneous record keeping is important to enable firms to prepare for potential disputes that could arise in the future.

Impact assessments

Where AI tools make decisions based on customers' data, organisations will need to undertake an impact assessment of the AI technology and consider how the decision making process may impact their customers, particularly if they are vulnerable, and whether or not the decisions are transparent and explainable. The EBA, for example, has highlighted that "adequate scrutiny of and due diligence on data obtained from external sources" could be included in risk assessments.

Fair treatment of customers

The Financial Conduct Authority (FCA) has indicated that, on top of considering how transparent AI decision making is, directors need to consider "what the business outcome will be" when engaging with AI technologies.

Robo-advice is one example. It has been seen as a low cost and highly efficient way of helping consumers who would benefit from financial advice, but are unwilling or unable to pay for it, to better manage their money and make more informed investment decisions. One consideration firms have had to take into account is that the customers who fall within the investment-advice gap include vulnerable people.

Suitability safeguards need to be applied to ensure that customers are protected and the right business outcomes are achieved.

Track record

More generally, a customer should review the supplier's AI experience in the market, including whether it has scaled AI models to meet customers' requirements while managing bias and discrimination risk at that larger scale.

Contract considerations

While a significant amount of the risk presented by AI technologies cannot realistically be dealt with at a contractual level, some core issues can be addressed. Where a customer is buying development services for an AI solution, there is a complex balance of risk factors that the customer and the supplier will need to negotiate. This is particularly the case where the AI technology is partially trained on customer or third party data.

Some of the basic contractual options open to firms when considering bias risk include:

  • considering what certifications and standards the supplier adheres to and how these can be tied to their governance and reporting obligations;
  • including obligations for programmers to undertake regular anti-bias/discrimination training;
  • evaluating the commitments the supplier is willing to make about the tools and processes it would use to monitor and evaluate the quality of data entering the data pool, and the controls it has in place to protect against the risk of biased data pools developing. This includes considering the processes it has put in place to test the accuracy, completeness and appropriateness of the data used, in terms of bias risk (a minimal sketch of such checks follows this list);
  • assessing the level of commitment the supplier is willing to give in relation to the accuracy, reliability, currency and completeness of both the 'input data' used by the AI system and its outputs.
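
In practice, the monitoring and data quality commitments described above often reduce to automated checks run on each new batch of data before it joins the training pool, which the contract can require the supplier to operate and report on. A minimal sketch of such checks, with illustrative thresholds and assuming tabular data in pandas:

```python
import pandas as pd

def quality_checks(batch: pd.DataFrame,
                   baseline: pd.DataFrame,
                   max_null_rate: float = 0.02,
                   max_mean_shift: float = 0.10) -> list:
    """Flag data quality issues in an incoming batch before it joins the pool."""
    issues = []
    # Completeness: a high rate of missing values suggests a broken feed
    for col, rate in batch.isna().mean().items():
        if rate > max_null_rate:
            issues.append(f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.0%}")
    # Drift: a large shift in a numeric column's mean versus the baseline pool
    for col in batch.select_dtypes("number").columns:
        base_mean = baseline[col].mean()
        if base_mean and abs(batch[col].mean() - base_mean) / abs(base_mean) > max_mean_shift:
            issues.append(f"{col}: mean drifted more than {max_mean_shift:.0%} from baseline")
    return issues
```

Obligations of this kind are easier to enforce when the contract also specifies how the results are logged, reported and escalated.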

Opportunities and challenges ahead

While AI can assist in automating decision making processes and delivering cost savings, firms should carefully consider the AI tool being sourced and commit resources towards monitoring the solution to ensure that biased decisions are not being made.

Businesses will need to review and further understand who their customers are, what demographics they fall into and the social challenges they face in order to develop a transparent and accountable platform that drives good outcomes for customers.

Businesses should also consider engaging with collaborative industry initiatives to share best practice and knowledge on the development of AI. Given that the law in this area is likely to change in light of advancements in AI technology, and in approaches to trust in AI in particular, industry bodies such as UK Finance have advised firms to stay aware of potential changes in legislation and to contribute to the dialogue in this area.

Bias in AI systems presents both a challenge and an opportunity for technology developers and business users: the challenge is to translate non-discriminatory human values into code; the opportunity is to develop AI tools that humans can trust. That trust should translate into a greater uptake of AI tools by businesses across all sectors and provide opportunities for competitive advantage.

Hussein Valimahomed and Luke Scanlon are experts in AI in financial services at Pinsent Masons, the law firm behind Out-Law.
