
How UK financial services regulation applies to AI to be explained to businesses


Businesses can expect to get fresh guidance on how UK financial services regulation applies to the use of artificial intelligence (AI) systems later this year.

The Bank of England (BoE) said it would “aim to provide clarity around the current regulatory framework and how it applies to AI” in a new discussion paper it intends to publish.

Publication of the discussion paper will also give businesses an opportunity to have their say on how future policy relating to AI in financial services is shaped, the BoE said after the AI Public Private Forum (AIPPF) – a forum comprising representatives from business, academia, government and regulators – published its final report.

“One thing the AIPPF has made clear to us is that the private sector wants regulators to have a role in supporting the safe adoption of AI in UK financial services, building on what is already in place,” the BoE said. “Different types and sizes of firms will have different views and needs, and any regulatory interventions should be proportionate. Much work is already underway as regulators, domestically and internationally, consider issues relevant to their respective remits.”

“One of the key questions is how existing regulation and legislation, such as the Senior Managers and Certification Regime, may be applied to AI and whether AI can be managed through extensions of the existing regulatory framework, or whether a new approach is needed. To help address this question and support further discussion about what an appropriate role for regulators might look like, we will publish a discussion paper on AI later this year. It will build on the work of the AIPPF and broaden our engagement to a wider set of stakeholders,” it said.

“Discussion papers are used to stimulate debate on issues about which we are considering making rules or setting out expectations. The discussion paper will aim to provide clarity around the current regulatory framework and how it applies to AI, ask questions about how policy can best support further safe AI adoption, and give stakeholders an opportunity to share their views. The responses to the discussion paper will help us to identify what is most relevant to our remits and what is not, as well as help formulate any potential policy,” the BoE said.


The AIPPF was established by the BoE and Financial Conduct Authority (FCA) in 2020 and tasked with advancing understanding of how AI is used in financial services and promoting debate about how best to support safe adoption of the technology.

In its report, the AIPPF identified benefits it said the use of AI can bring to consumers and businesses. Those benefits include enabling greater personalisation of financial products and services and a more seamless customer journey and experience, speeding up business processes, supporting automation of both front- and back-office functions, and providing for more effective decision-making. Other benefits to the wider economy include enabling sense to be made of “the scale and complexity of the financial system” and helping to tackle fraud, it said.

The AIPPF also highlighted risks that arise from using AI, including the risk of baking in bias inherent in datasets, as well as the potential for a lack of transparency and explainability over how AI outputs are reached, and a lack of accountability and responsibility for decisions based on those outputs. It said that “inappropriate use of AI” can lead to financial loss and to reputational, regulatory, operational and cyber risk for businesses, as well as potential loss of intellectual property. Despite these risks, the AIPPF said the AI models used in UK financial services are becoming increasingly sophisticated.

The AIPPF outlined a series of good practice recommendations in its report to guide financial services firms’ use of AI and to address issues relating to data, the modelling of AI systems, and governance.

The forum said firms should take a holistic view when considering whether to adopt AI systems and seek to “coordinate data management and strategy with AI management and strategy”. It recommended firms put processes in place for tracking and measuring data flows, carry out regular data audits, and assess the value of the data they hold to “inform the cost/benefit analysis of the AI project”.

The AIPPF said firms “should be able to demonstrate full understanding of why they are using an AI application compared to something that is simpler and easier to understand that produces similar outputs”, and further recommended that use of AI should be subject to a documented sign-off policy and that risks be clearly explained, along with associated mitigation measures taken.

Firms were also advised to “strengthen” the contact between their data science and risk teams as AI models are being developed, provide training on AI with a view to building skills in the technology within their organisation, and ensure their risk and privacy frameworks contain “AI-specific elements”, as well as operational principles and “a set of ethical principles which can help guide decision-making”.

Financial services and technology law expert Luke Scanlon of Pinsent Masons said: “The approach which the AIPPF has taken in focussing on three areas – data, model risk and governance – is a practical one and provides a good discussion of the thinking on current best practice. What is particularly helpful is the recognition that model risk management should be viewed as a primary framework for managing AI risk – this stands in contrast to the proposed EU AI Regulation, for example, which does not take any steps towards referencing its central importance.”

The AIPPF said regulators should provide greater clarity on existing regulation and policy but said the guidance they issue “should not be overly prescriptive and should provide illustrative case studies”. It also called on regulators to “identify the most important and/or high-risk AI use-cases in financial services with the aim of developing mitigation strategies and/or policy initiatives”.
