
Out-Law Analysis

What the UK’s six AI principles mean for financial services


Financial services firms can take steps now to prepare for the planned introduction of a new system of regulation for artificial intelligence (AI) systems in the UK.

The regulatory framework is to be built around six cross-sector principles that the UK government outlined in July and which focus on ‘high risk’ AI – an approach to regulating AI that differs significantly from that proposed in the EU under the planned AI Act.

Formal regulatory guidance to shape firms’ approach to using AI is likely to be issued in the UK in due course. In the meantime, we have assessed each of the six principles against the views expressed by regulators via the AI Public Private Forum (AIPPF) – a forum set up by the Bank of England and the Financial Conduct Authority (FCA) – in an effort to understand what is likely to be expected of firms.

Undertaking an AI audit and assessing how AI use aligns with business objectives are just two of the measures that firms should consider to best prepare themselves for the new regulatory regime.

Ensuring AI is used safely

A common theme across both the EU and UK proposals is ensuring the safe use of AI.

Detailed UK guidance on risk categorisation is limited at this stage, but financial services firms would be wise to undertake an audit of their AI use and create an inventory of all AI applications that are in use or development. Firms should also proactively consider any uses of AI which could be deemed ‘high risk’ by regulators – the draft EU AI Act, which prescribes four risk categories, is a helpful starting point.

For example, if firms use AI as part of the decision-making process in a customer’s loan application to credit score customers or make decisions about their eligibility for a loan, this would likely be considered high risk. In that scenario, firms may be expected to complete a detailed conformity assessment to validate continued use of AI in this way. There are potential similarities in this regard to the use of data protection impact assessments that firms may already be familiar with under data protection law, though the scope of a conformity assessment would extend beyond personal data and require focus on high risk AI systems and the source of the data.
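By way of illustration only, the sketch below shows one way an entry in such an AI inventory might be recorded, assuming a simple internal register maintained in Python; the field names and risk labels are our assumptions, not prescribed regulatory categories.

```python
# Hypothetical sketch of an internal AI inventory entry; the fields and risk
# labels are illustrative assumptions, not prescribed regulatory categories.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                       # internal name of the AI system
    business_purpose: str           # what the system is used for
    owner: str                      # accountable senior manager or function
    lifecycle_stage: str            # e.g. "in development", "deployed"
    risk_category: str              # e.g. "minimal", "limited", "high"
    data_sources: list[str] = field(default_factory=list)
    conformity_assessment_done: bool = False

credit_scoring = AIInventoryEntry(
    name="Retail credit scoring model",
    business_purpose="Assess eligibility for unsecured personal loans",
    owner="Head of Retail Credit Risk",
    lifecycle_stage="deployed",
    risk_category="high",           # decisions on access to credit
    data_sources=["application data", "credit bureau data"],
)
```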

Ensuring AI is technically secure and functions as designed

In its policy paper, the Department for Digital, Culture, Media and Sport (DCMS) said: “AI systems should be technically secure and under conditions of normal use they should reliably do what they intend and claim to do.” This is likely to result in new AI-specific ongoing management, record-keeping and audit requirements for financial services firms.

To ensure technical security, firms should ensure that there are processes in place for tracking and measuring data flows within, as well as into and out of, their organisation. As noted in the AIPPF report, further guidance for financial services firms may in future also include requirements for new AI industry certifications. These could, for example, resemble International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards.

Once an AI system has been deployed, it should be reviewed regularly to assess whether it is continuing to operate as intended and is not causing external harm. AI models can be programmed to learn continuously from data. This puts models at risk of ‘concept drift’, where they evolve beyond their intended use and, in doing so, become less stable. It is important to put measures in place to manage these risks, especially in situations such as those described by the UK’s financial regulators in their October 2022 discussion paper on AI and machine learning, where AI systems used to model the probability of defaults may ‘drift’ and in turn cause firms to hold incorrect levels of regulatory capital.
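As a purely illustrative sketch, one common way of monitoring for this kind of drift is to compare the distribution of a model’s recent scores against a reference distribution using the Population Stability Index (PSI); the data, thresholds and window choices below are assumptions, not regulatory figures.

```python
# Illustrative drift check using the Population Stability Index (PSI).
# Thresholds and data are assumptions for illustration, not regulatory limits.
import numpy as np

def population_stability_index(reference, recent, buckets=10):
    """Compare two score distributions; larger values indicate more drift."""
    # Bucket edges come from the reference (e.g. validation-time) scores.
    edges = np.quantile(reference, np.linspace(0, 1, buckets + 1))
    expected = np.histogram(reference, bins=edges)[0] / len(reference)
    # Clip recent scores into the reference range so every score is counted.
    actual = np.histogram(np.clip(recent, edges[0], edges[-1]), bins=edges)[0] / len(recent)
    expected = np.clip(expected, 1e-6, None)   # avoid log(0) for empty buckets
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=10_000)   # scores seen at validation
recent_scores = rng.beta(2, 4, size=2_000)       # scores seen in production
psi = population_stability_index(reference_scores, recent_scores)
if psi > 0.2:  # 0.2 is a common industry rule of thumb, not a regulatory figure
    print(f"PSI = {psi:.3f}: score distribution has shifted - escalate for model review")
```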

Firms should also assess how AI use is aligning with their business objectives. Through robust ongoing management processes, firms will have a clear picture of the value of the data they possess. This will enable them to undertake a cost/benefit analysis for projects involving AI and demonstrate that the AI systems they use are achieving their intended purpose.

Looking to the future, it is important that firms consider whether AI is likely to become an integral part of achieving these business objectives. A survey published alongside the October 2022 discussion paper found that legacy systems were the biggest constraint on deploying machine learning applications. Firms must continue to take steps to update these legacy systems to ensure AI can be implemented effectively.

Make sure that AI is appropriately transparent and explainable

The UK’s pro-innovation approach is focused on regulating the highest risk uses of AI. A challenge of AI systems is that they cannot always be properly explained in an intelligible way. While this is not always a substantial risk, DCMS said that “in some settings the public, consumers and businesses may expect and benefit from transparency requirements that improve understanding of AI decision-making”. This bears similarities to the draft EU AI Act’s transparency requirements for AI use categorised as ‘limited risk’. As a minimum, financial services firms may be expected to explicitly notify customers where they are interacting with an AI system, whether directly – for example, an AI customer service chatbot – or as part of another service being provided, such as where AI is used to evaluate a loan application or detect fraudulent activity.

The AIPPF report provides some other helpful indications of potential future transparency requirements for firms. Customers may also need to be informed of: the nature and purpose of the AI in question, including information relating to any specific outcome; the data being used and information relating to training data; the logic and process used and, where relevant, information to support the explainability of decision making and outcomes; and accountability for the AI and any specific outcomes.

In practice, financial services firms may be able to achieve this in documentation akin to a privacy policy. To address DCMS’ comments, financial services firms could consider implementing a formal AI explainability appraisal process for internal, regulatory, and consumer use.
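As a minimal sketch of what the internal leg of such an appraisal might draw on, the snippet below attributes a single decision to its input features, assuming a scikit-learn credit model and the open-source shap library; the model, feature names and data are entirely illustrative.

```python
# Hypothetical sketch: per-decision feature attributions to support an
# explainability appraisal. Model, features and data are illustrative only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["income", "existing_debt", "months_at_address"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.Explainer(model, X)   # builds an explainer for the tree model
explanation = explainer(X[:1])         # explain a single loan decision
for name, contribution in zip(feature_names, explanation.values[0]):
    print(f"{name}: {contribution:+.3f}")  # signed contribution to the score
```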

Embed considerations of fairness into AI

As the AIPPF report highlighted, “AI begins with data” and many of the risks and benefits in AI systems can be traced back to the underlying data that feeds them. In the context of personal data and AI, the Information Commissioner’s Office (ICO) has already provided substantive guidance on various models of fairness, which may act as a useful indicator of the direction of further regulation. Compliance with the Equality Act 2010, which prohibits discrimination on the basis of nine protected characteristics, is also likely to feed into the interpretation of fairness. However, for now, DCMS has left the parameters of “fairness” to be defined by regulators, noting that it is context specific.

AI systems have the potential to exacerbate fairness issues present in poor-quality underlying data, particularly in unstructured datasets, so future regulatory guidance may include requirements around data validation. As best practice, the AIPPF report recommends that firms clearly document methods and processes for identifying and managing bias in inputs and outputs.
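By way of illustration, a very simple check of model outputs for disparity between groups might look like the sketch below; the column names and the chosen measure – the gap in approval rates between groups – are assumptions made for illustration only.

```python
# Illustrative sketch only: comparing approval rates across groups as one
# simple check for disparity in model outputs. Column names and the chosen
# measure are assumptions for illustration.
import pandas as pd

def approval_rate_gap(df: pd.DataFrame,
                      group_col: str = "group",
                      outcome_col: str = "approved") -> float:
    """Difference between the highest and lowest approval rates by group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
gap = approval_rate_gap(decisions)
print(f"Approval-rate gap between groups: {gap:.2f}")
# A large gap does not itself prove unlawful discrimination, but it is the
# kind of input/output evidence the AIPPF suggests firms should document.
```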

Financial services firms are already familiar with the concept of model risk management. This may be extended to include an assessment of the harm and risk caused to consumers where AI is used – for example, if the outcome is that consumers are denied access to credit. It could also be extended to include an assessment of how to mitigate the risk that AI systems have on wider financial markets.

As the AIPPF put it, “the benefits AI brings should be commensurate with the complexity of the system”. Firms should be able to justify why they are using AI instead of a more comprehensible process that produces a similar output.

Define legal persons responsible for AI governance

DCMS has said that “accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural”. Many firms are already subject to the Senior Managers and Certification Regime (SMCR). In the context of AI governance, it remains to be seen whether this will be updated or replaced with a new approach.

In accordance with the FCA’s guidance, firms should take a centralised approach to setting governance standards for AI. This could mean that primary responsibility for compliance sits with one or more senior managers, with business areas being accountable for the outputs, compliance, and execution against the governance standards. The centralised body should have a complete view of all AI models and projects to enable it to set the standards or policy for managing AI models and associated risks.

This approach is likely to be more effective than allowing different approaches to develop within a firm. It would also support a higher standard of education, training, and relevant information on the benefits and risks of using AI throughout the firm – something that is important given the rapidly changing nature of developments in AI and technology.

As noted by the AIPPF, existing governance frameworks and structures provide a good starting point for AI models and systems, partly because AI models will invariably interact with other risk and governance processes. As a guiding common principle, firms should always have clear lines of accountability and responsibility for the use of AI at the senior managers and board levels.

The rapid development of AI means that there may also be a skills gap at senior management level. Even where suitable governance structures are put in place, senior managers may lack the knowledge of how AI works that is needed to provide effective governance. This may make it difficult to meet the expectations set out in the Prudential Regulation Authority’s supervisory statement on board responsibilities in respect of corporate governance, which requires “diversity of experience” at board level. Firms may address any potential skills gaps by including AI-related questions in interviews for senior management roles.

Clarify routes to redress or contestability

There is no substantive detail on this yet, but DCMS said that the use of AI should not remove an affected individual or group’s ability to contest an outcome. This is likely to result in an extension of the existing recourse rights available to regulators for breaches in relation to AI. This may include outright bans on high risk AI systems which do not meet regulatory requirements and financial penalties for non-compliant firms.

Co-written by Krish Khanna and Kayode Ogunade of Pinsent Masons.
