Out-Law Analysis

Auditability of AI vital for financial services


In the financial services sector, reliance on artificial intelligence (AI) is becoming increasingly widespread. In addition to ensuring that the technology meets existing legal requirements, financial service providers should review their internal policies, governance frameworks and contracting practices to ensure they align with the latest thinking around the use of AI.

As part of this review, financial service providers should consider the effectiveness of their AI auditing practices. Below we consider what auditability means in the context of AI and the practical steps that can be taken when procuring AI, prior to its deployment and during its use in order to address AI risk.

What is auditability in the context of AI?

Financial service providers can turn to audits to demonstrate their accountability for their AI systems and the outcomes of their use. Auditability refers to the ability of an AI system to be evaluated and assessed – an AI system should not be a "black box". Auditability is closely linked to other central requirements for AI such as explainability and traceability.

Many financial service providers are under a regulatory obligation to ensure that they and their regulators can obtain effective access to data, and to share certain information with regulators upon request. Access and audit provisions of third party contracts therefore need to enable effective AI audits, in particular where information may be requested by regulators or is relevant to their objectives.

Regulators are likely to expect providers' internal audit plans to be based on a methodical risk analysis and take into account expected developments and innovation. Ensuring that AI systems are auditable can enable financial service providers to better demonstrate that they can assess, minimise and manage the risks inherent in adopting AI, both prior to deployment and during use.

There are also reputational risks to consider if an AI system fails to meet current or future public expectations for AI. Existing ethical AI frameworks stress the importance of auditability.

Auditability and the procurement model

How a financial service provider enables auditability of an AI system may depend on the procurement model used. If a financial service provider engages a supplier to develop a bespoke AI system, it may be able to exert more influence over the manner in which the system is designed and built, and to address auditability through contractual requirements – for example, through the specification and design requirements.

On the other hand, if a financial service provider procures existing AI technology, it is likely to have less influence over the system's design. In these circumstances, due diligence will play a greater role.

When approaching the market, financial service providers should make clear that auditability is a core requirement for an AI system. They should consider asking specific questions around auditability and review and evaluate proposed solutions on this basis. Providers could also seek independent evaluations at the due diligence stage.

Auditability during design and development

AI systems should be designed and developed in a way that allows them to be audited. From the early design phase, regulators may be interested in assessing the traceability and logging mechanisms for the AI system's processes and outcomes.
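
For illustration only, the sketch below shows one way such logging might work at the point of inference, in Python. The wrapper class, field names and log destination are assumptions for the purposes of the example, not any regulator's prescribed format.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone


class AuditedModel:
    """Hypothetical wrapper that writes an audit-trail entry for every
    prediction. The vendor model is assumed to expose a `predict` method
    and to return JSON-serialisable output."""

    def __init__(self, model, model_version, log_path="audit_log.jsonl"):
        self.model = model
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, features: dict):
        output = self.model.predict(features)
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            # Hashing the inputs makes the record tamper-evident without
            # necessarily storing sensitive raw data in the log itself.
            "input_hash": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        with open(self.log_path, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return output
```

A wrapper of this kind keeps a per-prediction record of what went in, what came out and which model version was responsible – the basic raw material of traceability.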

A financial service provider may find that the AI design involves different trade-offs, for example between explainability and statistical accuracy. While a regulator may accept that some AI systems, such as those based on deep learning, can make it hard to follow the logic of the system, they may take the view that the circumstances in which these competing interests cannot be reconciled are limited.

Providers will, in these circumstances, need to balance the extent to which an AI system is explainable against concerns around accuracy. Similar risk decisions will need to be made around other trade-offs, such as between accuracy and privacy, and between explainability and security.

An appropriate audit trail at the design stage, evidencing the decisions which have been made in respect of trade-offs, will provide greater assurance of compliance with the expectations of regulators. Where a bespoke solution is being developed, contractual provisions requiring the financial service provider to be regularly consulted on these design decisions, and giving it the ability to input into them, may be what is needed.

Auditability prior to deployment

As part of assessing whether the AI system is ready for deployment, financial service providers should satisfy themselves of the types of audit and assurance that the system has gone through. The World Economic Forum guidelines for AI procurement recommend using "a process log that gathers the data across the modelling, training, testing, verifying and implementation phases of the project life cycle". A supplier can therefore be required to demonstrate the auditability of a system before it is submitted for testing, and to provide copies of reports showing the outcomes of all testing and evaluation it has conducted.
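
As a purely illustrative sketch of what such a process log might look like in practice, the following Python structure records one entry per life-cycle activity. The phase names follow the WEF wording quoted above; the remaining fields are hypothetical.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

# Phases taken from the WEF wording; all other fields are illustrative.
LIFECYCLE_PHASES = {"modelling", "training", "testing", "verifying", "implementation"}


@dataclass
class ProcessLogEntry:
    phase: str       # one of LIFECYCLE_PHASES
    activity: str    # e.g. "bias evaluation of candidate model"
    artefacts: list  # references to datasets, model versions, reports
    outcome: str     # e.g. "passed; report ref QA-014"
    recorded_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if self.phase not in LIFECYCLE_PHASES:
            raise ValueError(f"unknown life-cycle phase: {self.phase}")


def append_entry(entry: ProcessLogEntry, path="process_log.jsonl"):
    # Append-only storage helps preserve the integrity of the audit trail.
    with open(path, "a") as log:
        log.write(json.dumps(asdict(entry)) + "\n")
```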

Financial service providers should also consider conducting their own tests of the AI system to ensure it is suitable prior to deployment. As part of this testing, consideration should be given to whether the system will continue to be auditable over its life.

The financial service provider will also need to consider whether it has the skills required to properly assess any testing of the AI system. In some circumstances it may be appropriate to bring in specialist skills, upskill existing personnel, or engage an independent expert to help evaluate and test the system.

Training and knowledge transfer are critical for financial service providers, to ensure they are able to properly understand and use the AI system and discharge their legal responsibility for it once deployed. This should be reflected in the contractual requirements.

Auditability during use

Financial service providers need the continued ability to audit the AI system once it has been deployed. Audit rights obtained at the contracting stage should enable effective scrutiny of the AI system itself, which may include reviewing its underlying algorithms. It may not always be possible to access the model code, particularly where the code is commercially sensitive, but sufficient information to shed light on the relationship between the model, its input data and its outputs – showing how the model is working – will likely be required.
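
One hedged illustration of what that might mean in practice: even where the supplier only exposes the model as an opaque scoring function, a simple black-box probe can evidence how outputs respond to controlled changes in each input. The Python function below is a sketch under that assumption, and presumes numeric inputs and outputs.

```python
def sensitivity_probe(predict, baseline: dict, deltas: dict) -> dict:
    """Record how the model output moves when each input feature is
    perturbed in isolation. `predict` is the opaque, vendor-supplied
    scoring function (an assumption); inputs and outputs are numeric."""
    base_output = predict(baseline)
    report = {}
    for feature, delta in deltas.items():
        perturbed = dict(baseline)
        perturbed[feature] += delta
        # The difference shows the direction and size of the model's
        # response to this feature, without access to the model code.
        report[feature] = predict(perturbed) - base_output
    return report
```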

If AI technology is part of an overall outsourcing, cloud or critical third party arrangement, financial service providers should also consider whether they need to retain the ability to engage independent auditors to audit the AI system. Providers should also consider securing the ability to require modifications to the AI system if an audit identifies any issues, and as new risks and considerations arise.

It may be appropriate to require suppliers to conduct their own audits and evaluations of the AI system over the term of the contract, including on areas of AI risk such as bias and discrimination, and to provide the results of such audits and evaluations to the financial services provider.

Effective information and reporting requirements around the AI system, its use, and the results and outcomes of its use may also be important in order to address risk. This can be ensured through contractual requirements for documentation, such as logs, and for reporting, including against service levels and KPIs. Financial institutions need to be able to evidence and demonstrate how their AI systems comply with all legal requirements.
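
By way of example only, reporting against agreed metrics could be as simple as aggregating the inference audit log into a periodic summary. The log format follows the earlier sketch, and the "override rate" KPI is an assumed example, not a contractually required metric.

```python
import json


def periodic_report(log_path="audit_log.jsonl") -> dict:
    # Aggregate the inference audit log into a summary that can be
    # checked against agreed service levels and KPIs.
    with open(log_path) as log:
        entries = [json.loads(line) for line in log]
    total = len(entries)
    # "human_override" is a hypothetical field flagging predictions that a
    # human reviewer overruled - one possible accuracy-related KPI.
    overridden = sum(1 for e in entries if e.get("human_override"))
    return {
        "predictions": total,
        "override_rate": overridden / total if total else 0.0,
    }
```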

Conclusion

As part of discharging their responsibility and accountability for an AI system and its outcomes, financial service providers should ensure the AI system continues to be auditable for as long as it is used. This is particularly important in higher-risk applications of AI, such as automated decision-making which impacts individuals.

If any customer claims arise in the future, financial service providers will need to explain how decisions have been made. Auditability will help demonstrate that an AI system is behaving as it was designed to, and that the results and outcomes produced by it continue to be relevant, appropriate and accurate.

Regulators within the financial services sector are continuing to give close attention to the use of AI. While guidance may not yet be specific in all areas, there are many steps that can be taken to ensure that the regulatory and reputational risks associated with AI are effectively addressed.
