Out-Law Analysis 13 min. read

Risk issues to consider before adopting AI in financial services


Financial institutions must consider the risks inherent in using artificial intelligence (AI) tools, and put controls in place directly or through cooperation with suppliers to manage those risks, to ensure they and their customers can benefit from the use of the technology.

This exercise will help banks, insurers and other financial institutions put in place sound policies and practices around governance, data protection and outsourcing, for example, and ensure they are well placed to comply with financial regulations as they apply to AI.

Growing interest in AI in financial services

AI technologies represent a collection of new and exciting solutions that are disrupting businesses and opening new opportunities to monetise data.

Put simply, in financial services AI involves software that can make human-like decisions, but at a much faster and more efficient rate. Trained on and fuelled by data, AI can unlock new commercial and economic opportunities by introducing efficiencies and insights into core systems.

The uptake of AI technology within the financial services sector is increasing, with a recent report from the FCA estimating that two-thirds of financial services businesses are now using some form of machine learning.

There are risk management issues institutions should consider before they engage an AI service provider. Working through these will help businesses determine the level of risk they face and explore it further when contracting with AI providers. Financial services businesses should implement processes and controls in proportion to the risk posed by the outsourcing. The highest level of due diligence of AI suppliers, for example, should be considered where mission-critical data is being provided; where only internal, non-sensitive data is the subject of the service, firms may adopt a tapered approach.

Use cases

Current use cases of the rapidly evolving AI technology include:

  • Consumer and commercial borrowing – using AI to make a faster and more accurate assessment of a potential borrower, at less cost, and accounting for a wider variety of factors. ZestFinance's 'Zest Automated Machine Learning' uses thousands of data points to help companies assess borrowers with little to no credit information or history. Clients include US financial services businesses such as Discover Financial Services and Prestige Financial Services.
  • Better fraud detection – DataRobot's AI tool helps financial institutions build accurate predictive models that enhance decision making around fraudulent card transactions. Clients include US Bank and LendingTree. Ayasdi's 'Ayasdi AML' machine learning application is used by customers to identify money laundering and to detect mortgage fraud.
  • Providing price forecasts – using AI to provide short-term market price forecasts for financial markets. Bloomberg, in collaboration with Alpaca, created 'Alpaca Forecast', an app which uses a deep learning engine coupled with high pattern recognition capabilities to analyse tick data at a speed humans cannot match.
  • Dealing with front office customer enquiries – using AI for customer chatbots. Microsoft's Azure AI platform is used by Raiffeisen, a leading Swiss financial services company, to create chatbots which can calculate mortgages, find an adviser in a user's region and answer customer queries.

Risks and controls

AI can effectively carry out tasks traditionally performed by humans, but more quickly and efficiently. However, the technology is not infallible and there are instances where it may make incorrect decisions. Having the correct controls in place from the outset, and ensuring service providers are able to assist, can minimise or mitigate the impact of these errors.

Luke Scanlon

Head of Fintech Propositions

Businesses may wish to consider whether their existing processes help satisfy regulatory and ethical requirements of explainability or if additional measures require to be put in place

Businesses should consider internally what systems they have in place to mitigate some of the more acute risks that AI may pose. Where businesses do not have these controls in place, they should engage with their AI technology suppliers to discuss what controls can be offered as part of the solution. These could include staggering the level of human control or input, or a guardrail system that switches the AI off if it begins to produce incorrect outputs.
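By way of illustration only, the sketch below shows one way a guardrail of this kind might be structured, assuming a hypothetical scoring model that returns a confidence figure with each decision. The thresholds and routing rules are invented for the example and would in practice be agreed with the supplier and tailored to the use case.

```python
from dataclasses import dataclass

# Hypothetical thresholds - in practice these would be agreed with the AI supplier
CONFIDENCE_FLOOR = 0.7      # below this, a human must review the decision
ERROR_RATE_CEILING = 0.05   # above this, the system is switched off pending investigation

@dataclass
class Decision:
    output: str
    confidence: float

class Guardrail:
    def __init__(self):
        self.total = 0
        self.flagged = 0
        self.halted = False

    def check(self, decision: Decision) -> str:
        """Route each AI decision: accept, escalate to a human, or halt the system."""
        if self.halted:
            return "halted"
        self.total += 1
        if decision.confidence < CONFIDENCE_FLOOR:
            self.flagged += 1
            if self.flagged / self.total > ERROR_RATE_CEILING:
                self.halted = True          # the 'switch off' described above
                return "halted"
            return "escalate_to_human"      # staggered human input
        return "accept"
```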

Another risk to consider from the outset is integration and implementation. AI providers will need to understand the company's IT estate, processes and data sets before rolling out a proof-of-concept model. Implementation of AI can become protracted where the parties have not fully discussed issues such as legacy systems and the types of data to be analysed, for example whether the data is only structured or also unstructured. Where these conversations take place at an early stage, realistic expectations and milestones can be agreed in principle and then translated into the contract.

Consider:

  • What internal systems are in place to monitor the AI solution?
  • Have you discussed what monitoring tools the AI provider has available?
  • Are both parties clear on timescales for implementation?

Transparency

Businesses should be clear on the data sets that have been selected and used to train, test and deploy the AI. If the system has been trained on inaccurate data sets or has not completed the relevant training, the level of errors in its outputs is likely to be much higher.

It is important for the business to engage with the AI service provider and to understand the services being provided. In particular, internal stakeholders should understand the underlying decision-making and evaluation process used by the AI tool and where decision-making cannot be traced or explained, businesses should ensure that they have in place processes to deal with such circumstances.

Businesses should also seek to discuss with AI providers how decisions and processes are logged and made available for review. From a regulatory perspective, the management board will remain accountable to regulators. The board therefore needs to be able to understand and explain, to the extent possible, the rationale behind decisions taken by AI systems and, where it cannot, consider whether it is comfortable using "black box" AI in particular areas of its business, such as customer-facing environments, or across the business at all.

Given the vast quantities of data used by AI systems, developing a transparent process can be challenging. However, businesses should consider documenting the data types being processed, where they are stored, which algorithms the AI system uses, the parameters set and where decisions are stored. At a high level this will assist in tracing erroneous data, should that be required, and may help the business explain decisions made by the AI to regulators.
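As a rough illustration of the kind of decision log described above, the sketch below appends each decision, together with the model version, parameters and inputs that produced it, to an audit file. The field names are hypothetical; a real schema would be agreed with the provider and shaped by regulatory expectations.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version: str, parameters: dict, inputs: dict, output, path="decision_log.jsonl"):
    """Append one AI decision to an audit log so it can later be traced and explained."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which algorithm/version produced the decision
        "parameters": parameters,         # the parameters set, as referred to above
        "inputs": inputs,                 # the data provided to the model
        "output": output,                 # the decision itself
    }
    with open(path, "a") as f:
        f.write(json.dumps(record, default=str) + "\n")
    return record["decision_id"]

# Hypothetical example: a credit decision
log_decision("credit-scorer-v1.2", {"threshold": 0.6},
             {"income": 42000, "history_months": 18}, "approve")
```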

Businesses may wish to consider whether their existing processes help satisfy regulatory and ethical requirements of explainability or if additional measures require to be put in place.

Consider:

  • What data sets are being provided?
  • Can you access logs of the AI decision making process?
  • How does the algorithm work, and can the decision-making models be tested? If so, can the testing be documented?

Governance

At a business level, it is important that the appropriate individuals and teams are in place to engage with the technology, oversee the outsourcing and work with the AI supplier on day-to-day issues.

Depending on the nature of the data being provided, the business might consider designating an individual or group to report directly into the management board, and who would be responsible for the oversight of the AI outsourcing. This individual or group might also be tasked with updating risk management frameworks to track the risks that AI may pose and regularly reviewing the steps taken with the supplier to minimise these risks. This approach will assist in building an effective AI risk management framework and promote a culture of transparency and ethical use of AI across the business.

Where a firm is regulated by the FCA, consideration should be given to obligations arising out of the senior managers and certification regime (SMCR). The FCA has advised that accountable individuals under the SMCR should ensure they are able to explain and justify the use of AI systems. In particular, board members and those holding senior management functions will have to evidence the 'explainability' of their firm's AI, that is, the decisions it makes. This requires board members and senior managers to know where AI is used across each business unit.

Skills

Businesses should consider whether they have the appropriate expertise in-house to engage effectively with AI service providers. The transition of services onto an AI platform may create an internal skills gap, which will only widen where individuals do not understand what is happening to the data. The risk of a skills gap to a business is that it could lose control of its data, and it would not have the resources available to monitor, review or explain the decisions made by AI.

It is therefore critical to review personnel skillsets and ensure that an appropriate team of individuals with the right skills and level of experience – for example, data scientists or developers – is available to review, monitor and manage the relationship with AI providers. This will mean that, throughout the lifecycle of the AI service, the organisation has people who can make sure other areas of concern, such as data protection and security, are sufficiently managed.

In addition to this, internal training sessions on the use of AI systems would be highly beneficial. This would help ensure relevant employees and managers across the business, ranging from those engaging with the AI daily up to board level, have an appropriate level of training and experience with AI.

Consider:

  • Are the correct teams and individuals identified within your organisation?
  • Do your personnel have the right skills and experience to engage with AI services?
  • Have you considered internal training sessions across the business?
  • Can the senior managers under the SMCR within your organisation explain how the technology works?

Security

Given the complexity of AI systems, hackers may seek to exploit that complexity to access company data and insert 'bad data' into the system. This is known as data poisoning, and is an attempt to influence the decision-making output of the AI system to the attacker's benefit and/or the company's detriment.
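As a purely illustrative sketch, the snippet below screens incoming training values and quarantines any that sit far outside a reference distribution. A crude check of this kind will only catch gross outliers; real defences against data poisoning are considerably more sophisticated and are something to discuss with the provider.

```python
import statistics

def screen_new_records(reference: list, incoming: list, z_limit: float = 4.0) -> list:
    """Reject incoming training values that sit far outside the reference distribution.

    This flags gross outliers that might indicate injected 'bad data'; it will not
    catch subtle, well-crafted poisoning attacks.
    """
    mean = statistics.mean(reference)
    stdev = statistics.stdev(reference) or 1.0
    accepted = []
    for value in incoming:
        z = abs(value - mean) / stdev
        if z <= z_limit:
            accepted.append(value)
        else:
            print(f"quarantined value {value} (z-score {z:.1f})")  # send for manual review
    return accepted

# Hypothetical example: the second incoming value is quarantined as an outlier
clean = screen_new_records([100.0, 102.0, 98.0, 101.0], [99.5, 5000.0])
```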

Before contracting with an AI provider, it would be very useful to review and understand the following:

  • the AI provider's security policies;
  • whether the business's own security and disaster recovery policies are sufficient to account for AI use, or whether new or amended policies are required;
  • business continuity policies;
  • the provider's track record and information about recent unplanned downtime or unavailability;
  • which IT safety standards the provider has been certified against;
  • the availability of penetration test reports;
  • whether or not your data will be encrypted.

Where a provider is not willing to share information about its security systems and policies, or does not make information available about its penetration testing, this should raise a red flag for many important service arrangements. In any event, this information should be reviewed by IT teams to ensure that the proposed service provider does not have gaps in its security systems.

Consider:

  • Have you reviewed the security policies?
  • What does the AI provider's track record look like?
  • What security standards do the provider's systems adhere to?

Regulatory

Financial services businesses in the UK are subject to regulatory oversight from the Financial Conduct Authority (FCA), Prudential Regulation Authority (PRA) and Bank of England. The regulatory regime also currently includes rules and guidance provided by the EU financial services authorities, the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority and the European Securities and Markets Authority. Expectations set by the Information Commissioner's Office, the UK's data protection authority, are also highly relevant to the use of AI.

Luke Scanlon

Head of Fintech Propositions

Banks need both business continuity plans and exit strategies in place, and they must be able to evidence that they have the skills and resources to adequately monitor the activities which have been outsourced to third party providers

Businesses need to review their relationship with the proposed AI provider and consider whether any working arrangement would make them non-compliant with regulations. By way of example, banks have prescribed contract clauses that need to be reflected in their outsourcing contracts and are required to have conducted effective due diligence before outsourcing a function. It is also worth considering firms' responsibilities arising from the SMCR.

Banks, for example, will need to ensure that where they are outsourcing a function, they remain accountable and have oversight of the relationship. To achieve this, and comply with EBA guidelines, banks need both business continuity plans and exit strategies in place, and they must be able to evidence that they have the skills and resources to adequately monitor the activities which have been outsourced to third party providers.

Given the FCA's focus on the development of culture within firms, coupled with the decision-making impact of AI, it would also be useful to consider how the business's values may be affected. Senior managers will need to be comfortable that the ethics underpinning the AI do not conflict with regulatory rules and guidance.

Consider:

  • How will the provision of AI services impact your regulatory requirements?
  • How will the firm's culture be impacted by the use of AI?
  • Who within the organisation will be accountable for oversight of the AI services?

Data protection

The collection and use of personal data in the UK, EU and across the world is regulated. Non-compliance with data protection laws runs the risk of regulatory fines, scrutiny and reputational damage.

To the extent that the AI technology is processing personal or confidential information, businesses should ensure that all processing complies with data protection laws including ensuring that the service provider has the appropriate technical and organisational measures in place to protect the data. A good starting point would be to analyse the data flows, as this will assist in determining compliance with the General Data Protection Regulation (GDPR).
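As an illustration of how that analysis might be recorded, the sketch below captures each data flow as a structured entry. The fields shown are hypothetical; a real data flow map would follow the firm's own record-of-processing templates and its advisers' guidance.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DataFlow:
    """One entry in a data flow map used to support GDPR analysis (hypothetical fields)."""
    data_categories: list      # e.g. ["name", "transaction history"]
    source_system: str         # where the data originates
    recipient: str             # the AI provider or sub-processor receiving it
    storage_location: str      # country or region where it is stored
    transfer_safeguard: str    # e.g. "standard contractual clauses", "n/a (UK/EEA only)"
    purpose: str               # why the data is processed

flows = [
    DataFlow(["name", "transaction history"], "core banking platform",
             "AI provider (EU data centre)", "Ireland", "n/a (UK/EEA only)", "fraud detection"),
]
print(json.dumps([asdict(f) for f in flows], indent=2))
```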

Due to the autonomous nature of some AI technology, there is a risk that both the institution and service provider will be considered to be joint controllers of personal data for the purposes of data protection law. In those cases the service provider would be subject to a greater volume of legal obligations than they would be if they were only classed as a processor of the data. This risk of joint controller status is enhanced where the institution and service provider are both dependent on each other's use of the data to meet their own obligations under an arrangement.

It is important that institutions assess these issues as part of a data protection impact assessment (DPIA). Under the GDPR, a DPIA is mandatory where processing is likely to result in a high risk to individuals, and the ICO has indicated that most uses of AI involving personal data will meet that threshold.

AI providers often transfer data across borders and sub-contract the processing of data. Businesses should understand clearly from the outset who will be processing their data, whether adequate safeguards are in place to protect such transfers, and how those safeguards operate. This will avoid opacity in the contract chain, which could prevent the business from complying with its regulatory obligations.

In accordance with EBA guidelines, where there is a transfer of personal or confidential data, businesses should adopt a risk-based approach to data storage and data processing locations. To this end it would be advisable for customers to review service providers' data sharing agreements and conduct an impact assessment where required.

Consider:

  • Have you reviewed the data protection and privacy policies?
  • Who will be the sub-processors handling your data?
  • Have you considered preparing a data protection impact assessment?

Ethics

There is a risk that decisions made by the AI tool may not be ethical, in that they could discriminate against a particular demographic group, either directly or indirectly. This could occur where the AI is trained on data that is skewed and not representative of the population to which it will be applied. By way of example, where the model has been trained predominantly on data from male profiles, there may be a risk that the tool will discriminate against women.
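As a simple illustration of the kind of check a business might run on outcomes, the sketch below compares approval rates across demographic groups using hypothetical labels. A disparity is a prompt for further investigation rather than proof of discrimination, and real fairness testing involves a much wider set of measures.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compare approval rates across demographic groups.

    `decisions` is a list of (group, approved) pairs, e.g. ("female", True).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}

# Hypothetical example
rates = approval_rates_by_group([("male", True), ("male", True), ("female", False), ("female", True)])
print(rates)  # {'male': 1.0, 'female': 0.5}
```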

The business should look to engage with the AI service provider and understand how they train their AI, in particular how the analytical tools they employ ensure that the data lake does not contain biases and is accurate. This is an evolving field in AI and businesses should look to continue engagement with AI service providers throughout the life of the outsourcing to minimise this risk.

Consider:

  • What policies does the AI provider have in place to tackle bias?
  • Does the AI provider have any anti-bias or discrimination controls in place?

Exit management

One of the core risks in software outsourcing contracts is "lock-in". This is effectively where a business cannot transfer its data from one service provider to another because the data is only available in a proprietary format, or because other factors tie the customer to the service provider.

In the context of AI, where vast quantities of data are being provided, it may prove difficult to segregate and provide data in a meaningful and purposeful way. To this end, it would be useful to have these conversations with providers from the outset of the engagement. Banks are required to ensure that their outsourcing arrangements can be ported from one provider to another with minimal service disruption to their clients.
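As a purely illustrative sketch, the snippet below exports a set of records to open CSV and JSON formats so the data is not tied to a proprietary structure. The field names are hypothetical, and the practical exit mechanics would need to be agreed with the provider and reflected in the contract.

```python
import csv
import json

def export_records(records: list, csv_path: str, json_path: str) -> None:
    """Write the same records to CSV and JSON so the data is portable to another provider."""
    if not records:
        return
    fieldnames = sorted({key for record in records for key in record})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
    with open(json_path, "w") as f:
        json.dump(records, f, indent=2, default=str)

# Hypothetical example
export_records([{"customer_id": 1, "score": 0.82}], "export.csv", "export.json")
```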

Where possible, businesses should consider what replacement solutions are available and whether they would be able to transition to an alternative platform. This exercise will involve preparing draft transition plans.

A business will have rights to its data, but perhaps not to all of the underlying intellectual property (IP) within the data lake. It should therefore consider what licences would be required from the service provider to effect a smooth transition of services to another service provider.

Consider:

  • What plans are in place to exit from the service?
  • Upon exit, in what format will data be received?
  • Are there any other replacement providers you could move the service across to?

Conclusion

Careful consideration of these issues from the outset will provide businesses with the appropriate tools to engage with AI businesses more effectively and will ensure that risks, concerns and queries raised in dialogue can be addressed contractually. These considerations should also give businesses the opportunity to take an in-depth look at potential AI partners and will assist them in selecting appropriate technology providers to partner with in future.

Hussein Valimahomed and Luke Scanlon are experts in fintech regulation at Pinsent Masons, the law firm behind Out-Law.
