Out-Law Analysis

Implementing model risk management for AI in financial services


Businesses seeking to use artificial intelligence (AI) tools in UK financial services have been advised to embrace ‘model risk management’ as a means by which to address risk inherent in the technology.

Financial institutions may be able to draw on their experience in assessing, planning for and managing other risks, but should also anticipate fresh challenges when implementing model risk management for AI.

What is model risk?

Models are commonly used within financial services as the basis for decisions and the monitoring of performance and operations.

Financial services businesses are often highly reliant on models, and on the management of model risk, to assist with predicting trends, preventing loss and increasing operational efficiency. The sector uses modelling techniques and processes to identify, measure, manage and mitigate potential risks which may arise for the business, investors, customers and the financial system as a whole.

In a recent report, the AI Public Private Forum (AIPPF) cited the definition of “model risk” as “the potential loss an institution may incur, as a consequence of decisions that could be principally based on the output of internal models, due to errors in the development, implementation or use of such models”. Model risk can also arise as a result of insufficient data and a lack of appropriate expertise.

The AIPPF described model risk management as “a primary framework for managing AI-related risks in financial services”.

Model risk and AI

With AI use increasing across the financial services sector, lessons learnt from challenges with existing model risk management processes can be used to help establish processes specific to AI. In establishing model risk management for AI, the AIPPF identifies complexity and explainability as the core challenges in managing risk arising from AI models.

Model risk exists at all stages of the AI model lifecycle, from design and build, through validation and deployment, to monitoring and reporting. Financial services businesses should ensure that their model risk management processes factor in each of these stages.

Other non-financial risks related to AI, such as data protection and privacy and cybersecurity, also need to be addressed. So does “transfer learning” – the use of knowledge gained for one purpose to solve problems for a different but related purpose.

Complexity

As AI use increases, AI models are becoming more complex. The complexity of an AI model may differ depending on the size of the financial services business and the scope of the services being provided.

The structure of an AI model and its complexity can also have an impact on other considerations such as the ability to reproduce models. It is often critical to be able to record model development so that it can be repeated with identical results.

Assessing model development can become more challenging where it is difficult to understand the model and its structure or operation. Reproducibility is therefore important in assisting with the requirements and principles of transparency, explainability and auditability.

Repeatable outputs may also be useful where customers query financial services businesses about individual decisions or ask for decisions to be retaken, or where customers exercise their rights in relation to personal data.

As the AIPPF report highlights, the scale of AI and data being used by financial services businesses can create record-keeping challenges, as it is unclear exactly what information should be recorded – whether test data, training data and/or source code, for example – and how long this data should be retained. Financial services businesses will also need to consider the cost involved in retaining such data.
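
To illustrate how the reproducibility and record-keeping points above might be approached in practice, the following is a minimal Python sketch that fixes the random seed and writes a training manifest recording the code version, data fingerprints and a retention date. The field names, file paths and retention period are illustrative assumptions rather than anything prescribed by the AIPPF.

```python
import hashlib
import json
import random
from datetime import date, timedelta

# Illustrative only: the fields recorded and the retention period are
# assumptions, not requirements drawn from the AIPPF report.
RETENTION_YEARS = 7  # hypothetical retention policy agreed with compliance


def sha256_of(path: str) -> str:
    """Fingerprint a data file so the exact training inputs can be evidenced later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_training_manifest(model_id: str, seed: int, training_data: str,
                            test_data: str, code_version: str) -> dict:
    """Record everything needed to repeat a training run with identical results."""
    random.seed(seed)  # fix randomness so the run is repeatable
    return {
        "model_id": model_id,
        "random_seed": seed,
        "code_version": code_version,  # e.g. a git commit hash
        "training_data_sha256": sha256_of(training_data),
        "test_data_sha256": sha256_of(test_data),
        "trained_on": date.today().isoformat(),
        "retain_until": (date.today() + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
    }


if __name__ == "__main__":
    manifest = build_training_manifest(
        model_id="credit-scoring-v3",    # hypothetical model name
        seed=42,
        training_data="data/train.csv",  # hypothetical file paths
        test_data="data/test.csv",
        code_version="9f2c1ab",
    )
    print(json.dumps(manifest, indent=2))
```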

Explainability

Having the ability to explain how an AI system works and how an outcome was achieved is central to the use of AI and to building customer trust. In recent years, there have been calls to clarify how much information is required to explain how an outcome was achieved, and when an organisation must be able to do so.

This is a particular concern where “black box” AI is in use. Black box AI describes systems whose workings users find difficult to understand, often because they rely on neural networks and autonomous learning.

The level of explainability required will depend on the context and the stakeholders to which an explanation is being provided. Different levels of transparency and access to information will be required depending on the perspective of the person with whom the information is shared.

Auditors, regulators, customers and other end users should not be treated the same. Financial services businesses, as customers of AI systems, may require higher levels of transparency, and greater volumes of information, to achieve regulatory compliance than other customers of AI systems.

When determining the appropriate level of explainability, the AIPPF drew attention to the customer experience. Explainability “becomes part of a much broader requirement on firms to communicate decisions in meaningful and actionable ways”, the report said, adding that “the focus is not just on model features and important parameters, but also on consumer engagement and clear communications”.
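
As a rough illustration of how the same decision might be explained at different levels of detail for different audiences, the sketch below uses a deliberately simple linear scoring model. The feature names, weights and wording are invented for illustration and are not taken from the AIPPF report or any particular firm's system.

```python
# Illustrative sketch: a toy linear scoring model whose decisions can be
# explained at two levels of detail. Feature names, weights and thresholds
# are invented for illustration.
WEIGHTS = {"income": 0.4, "missed_payments": -1.5, "account_age_years": 0.2}
THRESHOLD = 1.0


def score(applicant: dict) -> tuple:
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions


def explain_for_customer(applicant: dict) -> str:
    """Short, plain-language explanation of the main factor behind the outcome."""
    total, contributions = score(applicant)
    if total >= THRESHOLD:
        decision, main_factor = "approved", max(contributions, key=contributions.get)
    else:
        decision, main_factor = "declined", min(contributions, key=contributions.get)
    return f"Your application was {decision}. The biggest factor was: {main_factor}."


def explain_for_auditor(applicant: dict) -> dict:
    """Full breakdown of inputs, weights and contributions for audit or regulatory review."""
    total, contributions = score(applicant)
    return {"inputs": applicant, "weights": WEIGHTS, "contributions": contributions,
            "score": total, "threshold": THRESHOLD}


applicant = {"income": 2.5, "missed_payments": 1, "account_age_years": 3}
print(explain_for_customer(applicant))  # audience: the customer
print(explain_for_auditor(applicant))   # audience: auditors or regulators
```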

Model risk in practice

Financial services businesses that wish to adopt model risk management processes for AI use can take various steps to ensure any processes implemented are appropriate for AI models. Where AI is being used, documenting the process, inputs and outputs, and the risks and mitigation strategies implemented, will be important, and this documentation will need to be mapped to robust controls.

Financial services businesses should document and agree AI review and sign-off processes for all AI systems, with such processes being agreed prior to the implementation of the system. Inventories and logs of all AI systems in use and in development are also important, as are methods and processes for identifying and managing bias, together with frequent reviews of each system's performance and its impact on the business and externally. The AIPPF highlights that the management of both AI inputs and outputs will be useful.
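
A simple starting point for the kind of inventory described above might look like the sketch below; the fields, statuses and review logic are assumptions chosen for illustration rather than anything prescribed by the AIPPF or regulators.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch of an AI system inventory entry; the fields and the
# review logic are assumptions, not AIPPF or regulatory requirements.


@dataclass
class AISystemRecord:
    name: str
    owner: str
    status: str  # e.g. "in development" or "in use"
    signed_off_by: Optional[str] = None
    sign_off_date: Optional[date] = None
    next_review: Optional[date] = None
    bias_checks: List[str] = field(default_factory=list)


def overdue_reviews(inventory: List[AISystemRecord], today: date) -> List[AISystemRecord]:
    """Flag deployed systems whose scheduled review date has passed."""
    return [r for r in inventory
            if r.status == "in use" and r.next_review and r.next_review < today]


inventory = [
    AISystemRecord(name="fraud-detection-v2", owner="Payments team", status="in use",
                   signed_off_by="Model Risk Committee", sign_off_date=date(2022, 3, 1),
                   next_review=date(2022, 9, 1), bias_checks=["disparate impact test"]),
    AISystemRecord(name="chat-triage", owner="Customer Operations", status="in development"),
]
for record in overdue_reviews(inventory, today=date.today()):
    print(f"Review overdue: {record.name} (owner: {record.owner})")
```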

In addition to documenting the AI lifecycle, ensuring sufficient processes are in place to back-up data – whether that is training data, live data or outputs – is critical. This includes having in place plans to rectify issues using back-ups.

The AIPPF also calls for familiar model risk challenges, such as explainability and reproducibility, to be tackled in the back-up context. For example, there is a need to ensure data can be traced back to show how decisions were made and how the system operates.

Back-ups will also be useful in helping financial services businesses ensure they can meet regulatory requirements in respect of auditing systems, access to data, and minimising business disruption for customers should any identified model risks materialise.
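
One way of making individual decisions traceable back to the model version and inputs that produced them, as described above, might be an append-only decision log along the following lines. The identifiers and field names are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch: each automated decision is logged alongside the model
# version and a fingerprint of the exact inputs, so it can later be traced
# back and, if needed, replayed against backed-up data. Field names and
# identifiers are assumptions chosen for illustration.


def log_decision(model_id: str, model_version: str, inputs: dict, output: str,
                 log_path: str = "decision_log.jsonl") -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to a specific model build
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,  # or a pointer to a backed-up data snapshot
        "output": output,
    }
    with open(log_path, "a") as f:  # append-only log, one JSON record per line
        f.write(json.dumps(record) + "\n")
    return record


log_decision("credit-scoring-v3", "9f2c1ab",
             inputs={"income": 2.5, "missed_payments": 1}, output="declined")
```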

Financial services businesses should therefore consider the extent to which existing model risk management processes can be used in relation to AI use and whether any amendments or separate AI specific processes will be required.

Co-written by Priya Jhakra of Pinsent Masons.
