Out-Law News

No bespoke rules for AI, FCA confirms


London’s financial district. AaronP/Bauer-Griffin/GC Images/Getty Images.


Financial services firms regulated in the UK will not face bespoke new rules to govern their use of AI, the sector’s conduct authority has confirmed.

In a newly published ‘AI update’ (26-page / 399KB PDF), the Financial Conduct Authority (FCA) said it is a “technology-agnostic, principles-based and outcomes-focused regulator”. It signposted a suite of existing rules and guidance that it said apply to firms’ AI use.

“The FCA’s commitment to technological neutrality in the case of AI regulation is in line with its previous approach to innovations impacting the financial services sector. However, this is far from a benign announcement from the FCA. By signalling that firms must manage AI risks within existing frameworks rather than expect bespoke rules, the FCA is reminding firms that they should already be proactively considering the impact of AI on the effectiveness of their controls framework,” said Jonathan Cavill of Pinsent Masons, an expert in financial services regulation.

The UK’s approach to regulating AI continues to evolve. In 2024, the previous Conservative government confirmed that the UK would take a different approach to regulating AI than policymakers in the EU, where a bespoke tiered system of regulating AI technology and its use by organisations has been established under the AI Act.

Instead, in an approach that continues under the current Labour government, the UK has chosen to address AI-related risks through a mainly sectoral approach to regulation, in which existing law and regulatory requirements are applied in the context of AI rather than new rules being written specifically to govern its use. Frameworks such as UK rules on data protection, consumer protection and product safety, as well as the equality law regime, are therefore relevant to AI use, as are sectoral regulatory frameworks applicable to, for example, financial services firms and medical device manufacturers.

While there is no overarching framework that governs use of AI in the UK, under the sector-based approach to regulation, regulators must fulfil their regulatory functions as they relate to AI with due regard to five cross-sector principles set out by the government – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

UK regulators must also have regard to their statutory objectives when performing their supervisory functions. In this regard, the FCA is among the group of regulators now subject to a duty to facilitate the international competitiveness of the UK economy and its growth over the medium to long term.

In its new paper, the FCA acknowledged the “important role” it has “in the continued success and competitiveness of the UK financial services markets and their contribution to the UK economy” and said that this “extends to the role of technology, including AI, in UK financial markets”.


The FCA’s update outlined how it sees its existing regulatory framework mapping across to each of the five cross-sector regulatory principles.

Principles and requirements set out in the FCA’s Principles for Business, as well as in its Handbook and Senior Management Arrangements, Systems and Controls (SYSC) sourcebook, were among those cited as relevant to firms’ use of AI.

Other rules referenced include the consumer duty regime, requirements focused on ensuring operational resilience and fair treatment of vulnerable customers, as well as the Senior Managers and Certification Regime.

In relation to accountability and governance, the FCA said, among other things, that holders of senior management functions at firms subject to the enhanced SMCR regime will be responsible for “any use of AI in relation to an activity, business area, or management function of a firm”.

The regulator added that firms’ reporting requirements in line with the consumer duty could also encompass “consideration of current or future use of AI technologies where it might impact retail consumer outcomes or assist in monitoring and evaluating those outcomes”.

“Firms deploying AI should expect heightened scrutiny around governance, explainability, and accountability – particularly where AI systems materially impact consumer outcomes or decision-making,” said Sébastien Ferrière of Pinsent Masons.


The FCA’s AI update also addressed the regulator’s own use of AI. It said it is “continuing to improve” how it uses data and technology and that this is helping it become “a more innovative, assertive and adaptive regulator”.

Among other things, the FCA said machine learning has already helped it combat online scams and that it is “particularly interested in how AI can help identify more complex types of market abuse that are currently difficult to detect”.

Cavill said: “The FCA is not only regulating AI – it is using it. Its adoption of AI for supervisory functions such as scam detection and monitoring of financial crime reflects a broader shift toward data-driven regulation.”

The FCA is in the process of establishing a system that will support firms with live testing of their AI models. The deadline for applications to participate in the first wave of this initiative closed last week. This system is one of several components of the FCA’s ‘AI Lab’, which is intended to provide a pathway for the FCA, firms and wider stakeholders to engage with AI-related innovation.

“We want to create an environment where new technology propositions can be tested safely and responsibly, with access to a suite of tools to collaborate and develop proof of concepts, including providing access to high-quality synthetic data,” said Jessica Rusu, the FCA’s chief data, information and intelligence officer.
