Out-Law Analysis

The challenges with data and AI in UK financial services


Financial services businesses should review the way they procure, manage and use data, and consider whether specific new processes need to be developed, so that they can implement artificial intelligence (AI) systems in a way that is effective, meets legal and regulatory standards and earns customers' trust.

The importance of data to financial services and, in particular, to the effective and trusted implementation of AI in the sector, was acknowledged at a recent meeting of UK authorities, financial institutions, technology companies and other stakeholders.

The Artificial Intelligence Public Private Forum

The Financial Conduct Authority (FCA) and the Bank of England hosted the second meeting of the Artificial Intelligence Public Private Forum on 26 February 2021. Attendees included representatives from Aviva, HSBC, Google, Microsoft, the Information Commissioner's Office, the Treasury and the UK Office for AI.

Discussion at the meeting centred on the issues and challenges surrounding data quality, data strategy and economics, data governance and ethics, and data standards and regulation.


Data quality

The Forum identified a number of challenges around the use of 'alternative data' – that is, unstructured, synthetic, aggregated and third-party data – and the difficulty of adapting existing data quality standards for AI, noting that the root causes of these issues can differ depending on the maturity of an organisation's use of AI.

Financial services businesses adopting and implementing AI must consider how they can address and mitigate these challenges. 

Businesses should have clear due diligence processes for assessing the sources from which data is obtained and should use reputable third-party data providers. Providers with the expertise and skill required to collect and supply data, and with an interest in doing so, are likely to deliver better quality data. The quality of the output of an AI system will only be as reliable and accurate as the data input.

Use of unstructured data also creates challenges. An AI system will generally operate more accurately and with greater sophistication the more data it has to work with, provided that data is of good quality.

Data quality challenges arising from ‘data lakes’

Unstructured data provided by a third party may originate from a 'data lake' – the use of on-premise and cloud-based data lakes is becoming more common within organisations. As the European Banking Authority has previously highlighted, however, one of the most common challenges with unstructured data and data lakes is the ability to find, understand and trust the data that is needed, because that data is often held in formats that are hard to interpret or even conflicting.

To ensure that the data used is of sufficient quality to provide reliable outputs, organisations will need to thoroughly clean and prepare the data and convert it into a form that can be interpreted and understood not only by the AI system using the data but also by the individuals supervising the AI functions. This can be difficult where the data is incomplete, inconsistent or contains errors.
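As a purely illustrative sketch of what such cleaning and preparation can involve, the checks below test a tabular extract for completeness, duplication, implausible values and unparseable dates before it is passed to an AI system. The file name and column names are hypothetical, and any real pipeline would be tailored to the data in question.

```python
# A minimal sketch of automated data-quality checks on a tabular
# extract from a data lake, using pandas. The file and column names
# ("customer_id", "income", "application_date") are hypothetical.
import pandas as pd

df = pd.read_csv("data_lake_extract.csv")

issues = {
    # Completeness: fields the model relies on must be populated
    "missing_customer_id": int(df["customer_id"].isna().sum()),
    "missing_income": int(df["income"].isna().sum()),
    # Consistency: duplicated records distort training and outputs
    "duplicate_rows": int(df.duplicated().sum()),
    # Accuracy: values outside a plausible range suggest upstream errors
    "negative_income": int((df["income"] < 0).sum()),
}

# Convert free-text dates into a single machine-readable format;
# rows that cannot be parsed are flagged rather than silently dropped
df["application_date"] = pd.to_datetime(df["application_date"], errors="coerce")
issues["unparseable_dates"] = int(df["application_date"].isna().sum())

# Surface the results to the individuals supervising the AI function
for check, count in issues.items():
    print(f"{check}: {count}")
```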

Businesses procuring data from third parties should consider how these challenges can be addressed contractually. They could, for instance, require third-party providers to give assurances around transparency and data quality, and to warrant that data has been lawfully collected.


AI-specific processes could improve data quality

Many organisations exploring the use of AI will look first to existing internal processes to meet challenges around data quality. However, organisations must consider whether those processes are suitable and fit for purpose in an AI context and, where they are not, ensure that AI-specific governance processes are developed.

The Forum discussed how use of existing processes could create problems where “existing control frameworks cannot scale to the volume and variety of features in the data or the range of applications of the AI model”. Work by the Alternative Data Council on best practice and data quality standards for alternative data was highlighted by the Forum as a useful tool for financial services providers using data and AI.

Prioritising this consideration of processes at the outset should help financial services businesses reduce the risk of issues arising later in the AI lifecycle. It would also assist with continued compliance with relevant industry principles, ethical standards and regulatory requirements, and support cost-effective adoption of AI.

It was also noted by the Forum that AI models, unlike traditional models, may need continuous monitoring, and that data quality may differ significantly depending on a source's interest in sharing data, cost and approaches to open data sharing.
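What continuous monitoring might look like in practice will vary, but one common technique is to compare the statistical profile of live inputs against the data the model was trained on. The sketch below applies the population stability index (PSI) to a single numeric feature; the data is simulated, and the 0.2 alert threshold is a widely used rule of thumb rather than a regulatory figure.

```python
# A minimal sketch of continuous input monitoring using the
# population stability index (PSI), one common way to detect drift
# between training data and live data for a single feature.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.normal(50_000, 15_000, 10_000)  # simulated training-time incomes
live = np.random.normal(55_000, 18_000, 2_000)       # simulated live incomes

score = psi(baseline, live)
if score > 0.2:  # rule-of-thumb threshold, not a regulatory standard
    print(f"PSI {score:.3f}: significant drift - escalate for review")
else:
    print(f"PSI {score:.3f}: inputs stable")
```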

The role of standards in improving data quality

Increasing reliance on data heightens the importance of data quality and of the standards that should be adhered to. The Forum identified a lack of consensus on data standards in the financial services sector, including on what constitutes good practice, and noted that there may be challenges in applying existing data standards to AI. It was noted that data standards developed as part of the open banking regime may be useful when using AI in financial services.

When looking at data quality, financial services businesses must also ensure that the collection and use of any data, whether obtained from customers or third parties, complies with existing legislation and regulatory principles relating to data. In particular, compliance with data protection law is a core requirement: the AI-related services and products offered in financial services – from chatbots to tools that assist with credit applications, provide insurance quotes and analyse data – will commonly entail the processing of personal data.

When considering deploying AI or other new technologies that will involve the processing of personal data, businesses should first carry out a data protection impact assessment. A review, and any necessary update, of internal audit processes is also advised and can assist with ensuring compliance with data protection principles, including those relating to anonymised and pseudonymised data where data is aggregated, or used or sourced from third parties. There is a growing body of best practice and non-binding guidance in this area, including from the UK Information Commissioner's Office, and the possibility of further guidance: the recent Kalifa review of UK fintech recommended the creation of new guidance to help UK fintechs understand how financial services regulatory rules apply to AI and how AI should be used in the context of UK data protection laws.
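As an illustrative sketch only, pseudonymisation is often implemented with a keyed hash, so that direct identifiers are replaced with stable tokens before data is aggregated or shared. The key and identifier below are hypothetical; under UK data protection law, pseudonymised data remains personal data for as long as the key exists, so the key would need to be held separately and access-controlled.

```python
# A minimal sketch of pseudonymising a direct identifier with a keyed
# hash (HMAC-SHA256) before data is aggregated or shared. The key and
# identifier format are hypothetical; in practice the key would come
# from a secrets manager, held separately from the pseudonymised data.
import hmac
import hashlib

SECRET_KEY = b"replace-with-key-from-a-secrets-manager"  # hypothetical

def pseudonymise(customer_id: str) -> str:
    """Deterministic pseudonym: the same input always maps to the same token."""
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymise("CUST-000123"))  # stable token, usable as a join key
```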

Additionally, organisations must review internal processes and standards so that the use of AI does not undermine their ability to comply with other regulatory requirements, such as the European Banking Authority's guidelines on outsourcing where relevant to data and third parties. Increased data sharing and the use of data by AI systems to improve financial services must align with a financial institution's existing regulatory obligations.

Data strategy

The Forum also looked at the need to create a framework for auditing AI, noting that there is currently no best practice for doing so. Responsibility for creating a framework, and the scope and subject of any audit – whether the AI model, the data or both – were also discussed.

In the absence of universal standards on auditing AI, financial services providers should look at how they can ensure that the AI systems they use can be adequately evaluated and assessed, noting the need for transparency and explainability. Consideration should be given to whether AI is auditable at all stages of the AI lifecycle, from the development of models and data sets through to deployment and the generation of outputs.
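One hypothetical building block for that kind of auditability is lineage logging: recording which data set and which model version were involved in a given activity, with the data hashed so its use can later be evidenced. The field names and the JSON-lines log below are illustrative assumptions; a production system would write to an append-only, access-controlled store.

```python
# A minimal sketch of recording lineage metadata so that an AI system's
# inputs and model versions can be evidenced at audit. The file paths,
# field names and JSON-lines log are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(dataset_path: str, model_version: str, purpose: str) -> dict:
    with open(dataset_path, "rb") as f:
        dataset_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "dataset_sha256": dataset_hash,  # evidences exactly which data was used
        "model_version": model_version,  # evidences which model produced outputs
        "purpose": purpose,
    }

record = audit_record("training_data.csv", "credit-model-2.1", "model retraining")
with open("ai_audit_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
```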

The Forum recognised that current financial audit and control frameworks could assist with developing good practice for controls and use of data relating to AI. A potential option for creating a risk management and governance framework for AI-related issues was also suggested, such as developing AI risk principles and mapping them against an organisation's existing frameworks.


Data governance, ethics and standards

Using AI in accordance with ethical principles such as fairness and the avoidance of bias could raise challenges for financial services providers. The Forum discussed how ethical concepts are complicated by a lack of consensus on what they require and how they are defined. It was also highlighted that organisations need the right skillsets to address AI and ethics, and to ensure that ethical principles are implemented at all levels of an organisation, from the leadership down. There was also discussion as to whether ethical principles should apply not only to the treatment of individuals but also to entities.

The European Commission is expected to publish its proposed new EU regulation on AI on 21 April 2021, and the regulation is expected to have ethics at its core. The prospect of EU reform accelerates the need for consensus within the UK financial services sector on adopting similar standards and/or principles on ethical AI, to allow for continuity of service and opportunities for financial services across the UK, Europe and globally.

Regulation

When considering data standards, risk management and auditing, the Forum looked at 'what changes with AI' to help shape discussion around regulation.

It was identified that the approach to regulating AI can differ depending on the types of models used – for example, models with a high level of explainability as against models whose outputs cannot readily be explained. Context, use case and the materiality of the model can determine which approach is more appropriate.

The Forum recognised that “having clear guidelines could increase confidence when deploying the technology, but could equally hamper desirable innovation”. It added that “striking the right balance is a key consideration for regulators and policy makers”.

As with all regulation, it is also important to recognise that approaches could differ from sector to sector and between jurisdictions. Financial services providers should ensure that internal processes are geared up to adapt to the possibility of different standards in different jurisdictions, and should consider how such standards will apply or take effect under local laws, particularly where data to be used by AI systems is sourced from, or transferred to, another jurisdiction.


Status quo will not last

Current policies and regulatory frameworks do not meet the wide and ever-changing needs of AI. As such, financial institutions may encounter problems if they attempt to retrofit existing processes and systems to fit AI requirements.

As the Forum recognised, public confidence is vital to the adoption of AI and other new technologies. The FCA and Bank of England are well known for supporting technology-driven innovation and, as they have put it, “are working to understand the changing role of data and data-driven technology – not just in terms of what it means for markets but also in harnessing the benefits to become more effective, data-enabled regulators”.

With the European Commission expected to announce its new legal framework for AI later this month, and the UK government recently announcing that it is preparing a new national AI strategy, the Forum's second meeting was timely. The discussion highlighted the data-related challenges the financial services sector is grappling with, and arguably shows why regulation may prove the appropriate policy response in future.

Co-written by Priya Jhakra of Pinsent Masons.