Out-Law Analysis

IMF’s view of generative AI risks provides learning points for financial firms


The International Monetary Fund’s (IMF’s) recent call on financial regulators to "strengthen their institutional capacity and intensify their monitoring and surveillance of the evolution" of generative AI is consistent with the increasing scrutiny being given to the technology.

Although the law, regulation and guidance governing the use of AI in the financial services sector continues to evolve, some regulators have highlighted that existing legal and regulatory frameworks already address many of the risks associated with developing and using generative AI and that they expect regulated businesses to have effective processes and controls in place to respond.

For the IMF, embedded bias, hallucinations, use of synthetic data, explainability, data privacy and cybersecurity are among the main risks arising in the context of generative AI – and what it has had to say can help inform the actions businesses will need to take when developing the technology or implementing it in financial services.

Embedded bias

The potential for historical biases to be replicated in training datasets or through algorithm design decisions has been well explored, as has the potential for automation bias to occur when users over-rely on AI outputs without questioning them. While the IMF highlights these issues, it also explores further bias-related concerns that have received less attention elsewhere.

One risk it highlights relates to the potential for bias to be embedded in the prompts users enter into generative AI applications and for that inputted data to influence generated outputs. Unlike other some AI systems, such as some of those used for detecting fraud and which may be limited to making predictions, generative AI will use the information in the prompts of users to create relevant answers based on probability.

This distinction highlights that a one-size-fits-all approach cannot be taken to AI risk assessments. A generative AI risk assessment should identify the steps that have been taken to address not only bias in datasets and algorithmic design, but also in how the system responds to biased input from users – an issue which may not be a relevant one in other AI contexts.

The IMF also highlights the risk of search engine optimisation techniques being deployed to influence generated outputs. “SEO tools will very likely be geared toward influencing the training of GenAI models – possibly skewing the models output and introducing new layers of biased data that could be difficult to detect”, it said. This is an issue which regulated financial businesses will need to address in some contexts.

Hallucinations

Hallucinations – a term that describes when AI tools produce false outputs that look as if they might be true – present a risk that needs to be carefully managed. As the IMF puts it, generative AI has the capacity to “produce wrong but plausible-sounding answers or output and then defend those responses confidently”.

Whether generative AI is used to generate risk assessments, profile customer segments, obtain market insights, or in other ways, a series of hallucinations could have a significant adverse impact on the financial safety of a financial institution or decrease the level of consumer protection it offers to its clients. Demonstrating a robust governance and control structure for managing the risk of hallucinations should be a priority for all regulated financial businesses.

At the technical level, that may involve understanding how best to reduce the likelihood of outputs which include hallucinations. At an organisational level, it is important that policies and accountability structures are put in place to ensure that staff are aware of the risk of hallucinations and that there are second- and third-line defence mechanisms in place to mitigate overreliance on generative AI outputs.

Use of synthetic data

The use of synthetic data has been highlighted in favourable terms by regulatory bodies, including in the UK, by the Information Commissioner’s Office and the Financial Conduct Authority. As a privacy-enhancing technique, its use in the training and testing of large language models (LLMs) can be significant factor in reducing privacy and confidentiality risks.

The IMF, however, highlights that the use of synthetic data is not risk-free and that data quality issues need to be addressed. In particular, consideration should be given to the extent to which synthetic datasets may reproduce real-world biases, gaps in datasets, or inaccuracies at scale.  

Regulated financial businesses should ensure that AI risk assessments evaluate potential risks around the use of synthetic data and set out how those risks are to be controlled. Synthetic data standard-setting initiatives are underway. These may further develop the approaches an organisation can take to validating synthetic data generators and the techniques used for post-generation validation.

Explainability

The IMF acknowledges that explainability is a “complex and multi-faceted issue”. However, this does not diminish the responsibility regulated financial businesses have to ensure that a sufficient level of transparency and explainability is achieved, nor their responsibility to ensure that appropriate communication is made to markets, clients, regulators, and other stakeholders.

According to the IMF, generative AI is “exacerbating the explainability problem”. As currently not all generated output from generative AI can be mapped with any level of accuracy to granular decisions of algorithmic design or training data choices, some techniques used for explainability may not be as useful in a generative AI context. 

Due to this explainability gap, the IMF suggests that financial institutions should limit what generative AI is used for. Currently, the proper domain for generative AI in financial services, according to the IMF, is “recommendations, advice, or analysis, where human actors make decisions and assume the responsibility for them”. 

Regulated financial businesses should continue to develop their general approach towards explainability at technical and organisational levels and modify it when applied to the use of generative AI. This may include adapting policy documents so that steps which should be taken to ensure that appropriate levels of explainability are clearly set out and proportionate to the risks posed by generative AI use cases.

An appropriate level of explainability will often require explainability to be defined and for methodologies for both system and process transparency to be established. It will also require documentation of the trade-offs between disclosure and the advantages gained where complete transparency is not possible.   

Data privacy

In many jurisdictions, it is common that a data protection impact assessment is undertaken before personal data is processed in a new way. This will help determine the legal basis for the data’s use and the limitations on the purposes for which it may be used. As part of this assessment, risk mitigation steps will be identified, including those requiring the removal or deletion of the data once it is no longer required.

Once a dataset with personal data is deleted or no longer accessible, identified risks relating to data privacy often will be reduced or alleviated. This typical path towards data privacy risk management, however, is challenged by the operation of generative AI.

Relying on LLMs, generative AI systems enable the making of inferences even after a training dataset has been discarded or made inaccessible in connection to the system’s use. According to the IMF, this leads to a data leakage risk, with the "AI/ML 'remembering' information about individuals in the training data set after the data are used and discarded”.

To avoid outputs that lead to sensitive data leaks through inference, financial institutions should ensure that their data privacy risk controls address this concern directly. Data protection impact assessments may need to be adapted to obtain information necessary to determine how the risk of misuse of personal data through inference can be effectively managed.

Cybersecurity

The use of generative AI creates its own set of cybersecurity risks. Ones highlighted by the IMF include its potential to be used to conduct more sophisticated ‘phishing’ messages and emails, and for ‘deepfakes’ to result in realistic videos which cause serious damage.  

Generative AI models are also vulnerable to data poisoning and input attacks and jailbreaking or prompt injection attacks. The latter includes "carefully designed prompts (word sequences or sentences) to bypass GenAI’s rules and filters or even insert malicious data or instructions”, the IMF said.

Cybersecurity practices need to evolve to address the specific risks of generative AI. Where third parties are involved in the development or deployment of generative AI on behalf of regulated financial businesses, standard approaches towards contractual protections may need to be adapted to address this risk.

Managing AI risk

Consistent with the IMF’s message, it is likely that financial regulators will continue to increase their efforts in monitoring and conducting surveillance of the risks presented by the use of generative AI. For regulated financial business, there is a need to continue to develop and adapt their approaches towards risk management to ensure that it remains consistent with the principles which influences the expectations set by regulators.

We are working towards submitting your application. Thank you for your patience. An unknown error occurred, please input and try again.