Out-Law Analysis

Meeting AI transparency requirements in financial services


Transparency is central to the development and use of AI. It generally requires information to be made available on an AI system’s logic, as well as how it was designed, developed, and deployed.

As policymakers and regulators in the UK and EU grapple with how to achieve AI transparency, businesses should consider how they might address the challenge of complying with new disclosure requirements while protecting their commercial interests.

Luke Scanlon

Head of Fintech Propositions

The FCA and Alan Turing Institute view

The FCA and Alan Turing Institute issued a report this year that highlights the importance of access to information to demonstrate that AI is trustworthy and used responsibly. In the report, transparency is described as “relevant stakeholders having access to relevant information”.

The report distinguishes between “system logic information” and “process information” and sets out the need to identify: the types of information that are relevant; who the relevant stakeholders are; and why those stakeholders are interested in information about an AI system.

Types of relevant information: system logic

System logic information, according to the FCA and Alan Turing Institute, is information that relates to how the AI system operates or works, i.e., the “inner workings” of a system. It may include information about data inputs or the link between the AI’s inputs and outputs.

System logic information can be used to understand and improve the reliability and robustness of an AI system. It can also be used to take corrective action and to address concerns about a system’s outputs once the system is in operation.

This information is not limited to the source code or other core proprietary information which may be protected by intellectual property rights or trade secrets. It may also include information that can be obtained through the testing of datasets, processes and the system itself.

Some system logic information might be obtained through observation during the lifecycle of a system, when human oversight processes are effectively implemented.

System logic information may help with demonstrating compliance with legal and regulatory requirements. The FCA and Alan Turing Institute explain, for example, that understanding a system’s logic “can be critical to avoiding unlawful discrimination; ensuring the adequacy of systems used in prudential risk management; assessing the extent to which trading systems may entail risks of insider trading or market manipulation; determining the potential of anti-competitive outcomes in systems used for pricing; or avoiding the unlawful processing of personal data”. 

Where customers make a request about a decision, system logic information may also assist. It may provide evidence that decisions are made in “non-arbitrary and methodologically sound ways”.

In some circumstances, such as in relation to credit or insurance underwriting decisions, it may allow customers to understand the effect their behaviour may have on the decisions a system makes.
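
The link between inputs and outputs can be made concrete with a short sketch. The example below is a hypothetical logistic credit scorer written in Python: the feature names, weights and threshold are invented for illustration, and nothing in it is drawn from the FCA and Alan Turing Institute report. It shows how per-input contributions could be disclosed as system logic information without exposing source code.

```python
import math

# A minimal sketch, not a real scoring model: the features, weights and
# threshold below are invented for illustration only.
WEIGHTS = {
    "months_since_missed_payment": 0.04,
    "credit_utilisation_ratio": -2.5,
    "years_at_current_address": 0.10,
}
INTERCEPT = -0.5
APPROVAL_THRESHOLD = 0.6


def score(applicant: dict) -> float:
    """Link inputs to the output: a logistic score between 0 and 1."""
    z = INTERCEPT + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))


def explain(applicant: dict) -> dict:
    """System logic information for a single decision: each input's
    contribution to the score, disclosed without revealing source code."""
    s = score(applicant)
    return {
        "score": round(s, 3),
        "decision": "approve" if s >= APPROVAL_THRESHOLD else "refer",
        # Positive contributions pushed towards approval, negative away,
        # so a customer can see which behaviours moved the decision.
        "contributions": {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS},
    }


print(explain({
    "months_since_missed_payment": 24,
    "credit_utilisation_ratio": 0.8,
    "years_at_current_address": 3,
}))
```

A disclosure of this kind gives a customer the "effect of behaviour" view described above (for instance, that reducing credit utilisation would raise the score) without exposing training data or proprietary code.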

Types of relevant information: process information

Process information focuses on access to any information about an AI system’s design, development and deployment which is not system logic information. It may include information relating to the training of users, data management practices and governance arrangements. 

The FCA and the Alan Turing Institute set out how process information may be gathered during each stage of the design and development of an AI system. This is primarily through the performance of risk assessments at each stage.

At the business case and problem definition stage, for example, it is expected that there will be an assessment of the need to use the AI system and of the extent to which it will be used. Assessments will then need to be made when preparing system requirement specifications and approaches to data acquisition and preparation; when developing, evaluating and selecting models; and when performing validation and verification assessments, as sketched below.
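
As a rough illustration only, the sketch below records a risk assessment against each of the lifecycle stages just described. The stage names mirror the stages above, but the record structure, field names and completeness check are assumptions made for this example, not a format prescribed by the FCA or the Alan Turing Institute.

```python
from dataclasses import dataclass, field
from datetime import date

# Stage names follow the design and development stages described above.
LIFECYCLE_STAGES = [
    "business_case_and_problem_definition",
    "system_requirement_specification",
    "data_acquisition_and_preparation",
    "model_development_evaluation_and_selection",
    "validation_and_verification",
]


@dataclass
class StageAssessment:
    """One risk assessment, performed at a single lifecycle stage."""
    stage: str
    assessor: str
    date_completed: date
    risks_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)


@dataclass
class ProcessRecord:
    """Process information for one AI system, gathered stage by stage."""
    system_name: str
    assessments: list[StageAssessment] = field(default_factory=list)

    def missing_stages(self) -> list[str]:
        """Stages not yet assessed: useful to audit teams, boards and regulators."""
        done = {a.stage for a in self.assessments}
        return [s for s in LIFECYCLE_STAGES if s not in done]


record = ProcessRecord("credit-underwriting-model")
record.assessments.append(StageAssessment(
    stage="business_case_and_problem_definition",
    assessor="model risk team",
    date_completed=date(2021, 6, 1),
    risks_identified=["need for AI unclear in low-volume segment"],
))
print(record.missing_stages())  # the four stages still to be assessed
```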

Process information can assist with addressing concerns about an AI system, its trustworthiness and responsible use. The information could be useful to those involved in making decisions about an AI system’s development and use, as well as those interested in the system’s performance – such as audit teams, board members, and regulators.

The relevant stakeholders

Different levels of transparency and access to information will be required depending on the perspective of the person with whom the information is shared. Auditors, regulators, customers and other end users cannot all be treated the same.

Financial services businesses, as customers of AI systems, may require higher levels of transparency, and greater volumes of information, to achieve regulatory compliance than other customers of AI systems. Providers of AI systems should therefore consider the extent to which they are required to explain their systems without exposing commercially sensitive information.

Consumers and end users may require a different type of information. For example, they may not need information on the system’s internal workings, but only the information necessary to understand whether correct outcomes have been reached and whether their privacy and other rights have been respected.

Finding a balance between transparency, protecting proprietary interests and conflicting customer needs may require that information be limited in some circumstances. Providers of AI systems should therefore assess the effect of disclosure on the stakeholder to which information is provided. The information should be in a format that is accessible for, and of value to, the relevant stakeholder, so that the disclosure enables the stakeholder to exercise their rights or achieve regulatory compliance, as appropriate, rather than having a negative impact. One way of operationalising this tiering is sketched below.
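
The sketch below assumes invented sensitivity labels and stakeholder categories; neither is fixed by any regulator or by the report discussed above. A single transparency record is filtered so that each stakeholder receives only the items appropriate to their perspective.

```python
# A minimal sketch: the sensitivity labels and stakeholder categories
# below are assumptions made for this example only.
TRANSPARENCY_RECORD = {
    "decision_outcome": "public",
    "plain_language_explanation": "public",
    "input_output_logic_summary": "restricted",
    "validation_test_results": "restricted",
    "model_architecture_details": "confidential",
}

ACCESS_LEVELS = {
    "consumer": {"public"},
    "business_customer": {"public", "restricted"},
    "auditor": {"public", "restricted", "confidential"},
    "regulator": {"public", "restricted", "confidential"},
}


def disclosure_package(stakeholder: str) -> list[str]:
    """Items disclosed to a stakeholder, limited to what is of value to them."""
    allowed = ACCESS_LEVELS[stakeholder]
    return [item for item, label in TRANSPARENCY_RECORD.items() if label in allowed]


print(disclosure_package("consumer"))           # outcome and explanation only
print(disclosure_package("business_customer"))  # adds system logic summary
```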

EU’s draft AI regulation

The European Commission’s draft regulation on AI includes transparency requirements for ‘high-risk’ AI systems. High-risk systems must be designed and developed to “enable users to interpret the system’s output and use it appropriately”. They must also be accompanied by instructions in an appropriate format that include “concise, complete, correct and clear information that is relevant, accessible and comprehensible to users”.

The draft regulation sets out the details that must be included in the instructions provided. These include the characteristics, capabilities and limitations of performance of the high-risk AI system: its intended purpose; the levels of accuracy, robustness and cybersecurity against which it has been tested and validated; and, in some circumstances, specifications for the input data or other relevant information in respect of the training, validation and testing data sets. Information on the human oversight measures in place and the expected lifetime of a high-risk system should also be provided.
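
That list of required details lends itself to a structured record. The sketch below groups the items named in the draft regulation into a single ‘instructions for use’ object; the class and field names are our own shorthand for those items, not terms taken from the draft text.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InstructionsForUse:
    """Shorthand record of the items the draft regulation requires in the
    instructions accompanying a high-risk AI system (names are our own)."""
    intended_purpose: str
    accuracy_level_tested: str
    robustness_level_tested: str
    cybersecurity_level_tested: str
    expected_lifetime: str
    known_limitations: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    # Only required in some circumstances under the draft regulation.
    input_data_specifications: Optional[str] = None

    def is_complete(self) -> bool:
        """Basic check that no mandatory item has been left blank."""
        return all([
            self.intended_purpose,
            self.accuracy_level_tested,
            self.robustness_level_tested,
            self.cybersecurity_level_tested,
            self.expected_lifetime,
            self.known_limitations,
            self.human_oversight_measures,
        ])
```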

The European Commission considers some systems to be lower risk. Where users interact with a chatbot, or their emotions or characteristics are recognised through automated means, they are to be informed that they are interacting with an AI system, unless this is clear from the circumstances. Where an AI system generates or manipulates image, audio or video content that “appreciably resembles authentic content”, there will be a need to disclose to users that the content has been generated or manipulated through automated means, subject to specific exceptions.

A balancing act

The importance of AI transparency can be clearly seen in recent regulatory developments, and such guidance can be used to inform best practice for providers and users of AI systems. However, achieving a balance between regulatory compliance, stakeholder interests and commercially sensitive information may be challenging.

Providers should consider how they can assess the appropriate levels of information for different stakeholders, to ensure AI transparency is helpful, not harmful, to those subject to AI decision-making.

Co-written by Priya Jhakra of Pinsent Masons.
