Looking to the future, it is important that firms consider whether AI is likely to become an integral part of achieving these business objectives. A survey published alongside the October 2022 discussion paper noted that the existence of legacy systems was the biggest constraint on deploying machine learning applications. Firms must continue to take steps to update these legacy systems to ensure AI can be implemented effectively.
Make sure that AI is appropriately transparent and explainable
The UK’s pro-innovation approach is focused on regulating the highest risk uses of AI. A challenge of AI systems is that their decision-making cannot always be explained in an intelligible way. While this is not always a substantial risk, DCMS said that “in some settings the public, consumers and businesses may expect and benefit from transparency requirements that improve understanding of AI decision-making”. This bears similarities to the draft EU AI Act’s transparency requirements for AI uses categorised as “limited risk”. As a minimum, financial services firms may be expected to explicitly notify customers where they are interacting with an AI system, whether directly – for example, in an AI customer service chatbot – or as part of another service being provided, such as where AI is used to evaluate a loan application or detect fraudulent activity.
The AIPPF report provides some other helpful indications on potential future transparency requirements for firms. Customers may also need to be informed of: the nature and purpose of the AI in question, including information relating to any specific outcome; the data being used and information relating to training data; the logic and process used and, where relevant, information to support explainability of decision-making and outcomes; and accountability for the AI and any specific outcomes.
In practice, financial services firms may be able to achieve this in documentation akin to a privacy policy. To address DCMS’ comments, financial services firms could consider implementing a formal AI explainability appraisal process for internal, regulatory, and consumer use.
Embed considerations of fairness into AI
As the AIPPF report highlighted, “AI begins with data” and many of the risks and benefits of AI systems can be traced back to the underlying data that feeds them. In the context of personal data and AI, the Information Commissioner’s Office (ICO) has already provided substantive guidance on various models of fairness, which may act as a useful indicator of the direction of further regulation. Compliance with the Equality Act 2010, which prohibits discrimination on the basis of nine protected characteristics, is also likely to feed into the interpretation of fairness. However, for now, DCMS has left the parameters of “fairness” to be defined by regulators, noting that it is context specific.
Because AI systems have the potential to exacerbate fairness issues present in poor-quality underlying data, particularly in unstructured datasets, future regulatory guidance may include requirements around data validation. As best practice, the AIPPF report recommends that firms clearly document their methods and processes for identifying and managing bias in inputs and outputs.
Financial services firms are already familiar with the concept of model risk management. This may be extended to include an assessment of the harm and risk caused to consumers where AI is used – for example, where the outcome is that consumers are denied access to credit. It could also be extended to include an assessment of how to mitigate the risks that AI systems pose to wider financial markets.
As the AIPPF put it, “the benefits AI brings should be commensurate with the complexity of the system”. Firms should be able to justify why they are using AI instead of a more comprehensible process that produces a similar output.
Define legal persons responsible for AI governance
DCMS has said that “accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural”. Many firms are already subject to the Senior Managers and Certification Regime (SMCR). In the context of AI governance, it remains to be seen whether the SMCR will be updated or replaced with a new approach.