Out-Law Analysis

Financial services – defining AI for future regulation

Financial services firms should review the technologies they use to determine whether they will be classed as ‘artificial intelligence’ (AI) tools for the purposes of UK and EU regulation.

Firms can expect to hear soon, in a white paper to be published by the Office for AI, whether general AI-specific regulation will be introduced in the UK. EU lawmakers are currently scrutinising separate plans for a draft new EU AI Act. Both developments are expected to focus on issues such as transparency, explainability and governance.

However, any new rules would only apply to technology that fits within the definition of AI in new legislation or regulation. Working out whether the technology they use will be in scope is therefore an important preliminary task for financial services businesses.

Defining AI

AI is a broad concept with no universally applied definition. What distinguishes AI from other software and technologies is the ability of AI systems to mimic and learn human-like behaviours. AI technologies are able to analyse their environment and, as Singapore’s Personal Data Protection Commission has put it, “seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning”. AI systems operate with different levels of autonomy depending on the outcomes sought from their use.

Many of the definitions of AI applied by regulators share common elements, such as references to learning or the replication of human behaviours.

The European Commission, in its draft AI Act, refers to “artificial intelligence systems” as “software that is developed with one or more of the techniques [listed in Annex 1 of the regulation]…and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

The UK government describes AI as “machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks”. The AI Public Private Forum (AIPPF) – a forum set up by the Bank of England and the Financial Conduct Authority (FCA) – recently adopted, in a report, the International Organization for Standardization (ISO) definition of AI, which describes it as “an interdisciplinary field, usually regarded as a branch of computer science, dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning”.

While the financial services sector is a long way off using fully autonomous systems, particularly in customer-facing environments, it is increasingly adopting technologies that carry out functions traditionally performed by human users. It is therefore important for financial services firms first to consider whether the technologies they use fall within the regulatory and governmental view of AI and, if so, what they need to have in place to govern the use of those technologies internally so as to ensure compliance with existing legislation and future AI regulation.

Using AI

The first step towards compliance is mapping out any AI use within an organisation. Financial services businesses can identify whether AI technologies are in use by examining “the characteristics of AI applications and how they differ from non-AI applications that produce the same result”, as the AIPPF has recommended.

The AIPPF identifies characteristics such as “the complexity of AI, its iterative approach, the use of hyperparameters, and the use of unstructured datasets” as elements which can indicate whether AI technologies are being used. Germany’s central bank and financial regulator, the Bundesbank and BaFin respectively, have taken a similar approach. Rather than specifically defining AI or machine learning, they have set out various properties associated with machine learning that can help identify which technologies fall within, or are excluded from, the scope of AI.
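
By way of illustration only, this kind of characteristic-based screening could be expressed as a simple inventory check. The sketch below is a hypothetical Python example: the SystemProfile structure, the characteristic flags and the two-signal threshold are assumptions made for illustration, not a test drawn from the AIPPF or BaFin guidance.

```python
from dataclasses import dataclass

# Illustrative only: these flags paraphrase the AIPPF-style characteristics
# discussed above; they are not an official regulatory test.
@dataclass
class SystemProfile:
    name: str
    learns_from_data: bool      # model parameters fitted from data
    uses_hyperparameters: bool  # tuning settings chosen outside training
    iterative_training: bool    # performance refined over repeated runs
    unstructured_inputs: bool   # free text, images, audio, etc.

def likely_ai(profile: SystemProfile) -> bool:
    """Crude screening rule (an assumption for this sketch): flag a system
    for closer governance review if it shows two or more characteristics."""
    signals = [
        profile.learns_from_data,
        profile.uses_hyperparameters,
        profile.iterative_training,
        profile.unstructured_inputs,
    ]
    return sum(signals) >= 2

# Example: a static rules-based fraud filter vs. a trained credit-scoring model.
rules_engine = SystemProfile("static fraud rules", False, False, False, False)
credit_model = SystemProfile("credit scoring model", True, True, True, False)
print(likely_ai(rules_engine))  # False -> likely outside the AI inventory
print(likely_ai(credit_model))  # True  -> flag for governance review
```

Any such screening would, of course, need to be aligned with the precise definitions ultimately adopted in legislation or regulatory guidance.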

Governing AI

Where financial services businesses determine that AI is being used, they should consider whether they are equipped to manage and control that use in a way that is compliant with regulation. It will be important to review existing governance processes to check whether they are appropriate and robust enough to accommodate the specific nuances of AI systems as compared with other technologies.

The policies and frameworks currently in place may not meet the broad and evolving demands of AI. As such, financial services businesses may encounter problems if they attempt to retrofit existing processes and systems to fit AI requirements. Where governance models and processes are not in place, businesses should look to implement them in preparation for future regulation of AI. An absence of such measures may lead to non-compliance and undermine customer trust in an organisation’s use of AI.

As well as establishing AI governance processes, financial services businesses should review the way they procure, manage and use data, and consider whether specific new processes need to be developed to implement AI systems in a way that customers trust, that is effective, and that meets legal and regulatory standards. Identifying and addressing challenges with data quality and data governance can help with this. Consideration should also be given to whether the right skillsets and personnel are available to establish or review a governance model and to implement it throughout the organisation.
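
As a purely illustrative sketch of the kind of data-quality checks referred to above, a firm might begin with simple automated profiling of a dataset before it is used to train or feed an AI system. The column names, sample data and pandas-based approach below are assumptions for illustration only, not a prescribed method.

```python
import pandas as pd

# Hypothetical data-quality profiling of a customer dataset intended for
# use with an AI system. Columns and figures are invented for illustration.
def basic_quality_report(df: pd.DataFrame) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # share of missing values per column
        "missing_share": df.isna().mean().round(3).to_dict(),
    }

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "income": [42000, 38000, 38000, None],
    "postcode": ["AB1 2CD", "EF3 4GH", "EF3 4GH", None],
})
print(basic_quality_report(df))
# {'rows': 4, 'duplicate_rows': 1,
#  'missing_share': {'customer_id': 0.0, 'income': 0.25, 'postcode': 0.25}}
```

Reports of this kind can feed into the wider governance model, giving a documented basis for deciding whether a dataset is fit for use.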

Governance of AI systems, and of the data sets those systems rely on, can also highlight deficiencies in a system or process, allowing organisations to put measures in place to mitigate or avoid non-compliance.

Expect more developments

Anticipated regulatory developments, such as the EU AI Act and the fleshing out of the UK’s approach to AI governance and regulation, reflect policymakers’ desire to drive investment in AI and build public confidence in the use of the technology.

Upcoming regulation is likely to cover a wide range of AI use across various sectors, including the financial services sector, and so governance of AI use will be central to preparing for compliance with the regulation.

While the European Commission’s AI Act is expected to focus primarily on high-risk AI and use of AI for law enforcement and by public authorities, specific requirements are also expected to apply to financial services businesses that are using chatbots in customer facing environments or using AI for credit scoring and/or reference checking. In the UK, the Bank of England and FCA have also committed to providing “clarity around the current regulatory framework and how it applies to AI” in the financial services sector in a new discussion paper which will be published later this year, and the UK government is expected to set out its approach to AI regulation through the Office for AI in a white paper in the coming months.

For financial services businesses, the regulatory and legal developments expected this year will shape the way they implement AI. They will also support the necessary continued focus on building consumer trust in AI by ensuring that the technology is used ethically, transparently and with human needs and rights at the forefront.

Co-written by Priya Jhakra of Pinsent Masons.
