
Out-Law Analysis

What EU plans for an AI Act mean for financial services


Financial services firms that provide or use artificial intelligence (AI) tools face stricter regulation of those activities under proposals for a new regulation published by the European Commission.

The focus of the draft AI Act is the creation of harmonised rules for a proportionate, risk-based approach to AI in Europe, but it would affect the use and development of AI systems globally, including within the financial services sector.

The regulation, if adopted in its current form, would introduce:

  • a strict regime and mandatory requirements for ‘high risk’ AI, such as AI systems used to evaluate creditworthiness or establish credit scores;
  • limited requirements for specific types of AI, such as chatbots; and
  • a ban on certain uses of AI, such as AI systems which deploy subliminal techniques beyond a person’s consciousness.

Scope and application

The proposed new legal framework would apply to all sectors, public and private, including financial services. It extends to providers and users located in the EU as well as those based in other jurisdictions. The regulation would not apply to private, non-professional use of AI.

The proposed new regulation is intended to apply to:

  • providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country;
  • users of AI systems located in the EU; and
  • providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the EU.

There is no universally accepted definition of AI, and the term is construed differently across various industries, sectors and jurisdictions. The European Commission has proposed a ‘technology neutral’ definition of ‘artificial intelligence system’ which is intended to be flexible and future-proof to account for developments in the technology. That definition is “software that is developed with one or more [specified] techniques and approaches [e.g. machine learning] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
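
To illustrate how broad that definition is, the short Python sketch below shows a hypothetical system that would plausibly fall within it: software using a listed technique (machine learning) to generate outputs (predictions) for a human-defined objective, here the credit-scoring use case the Commission itself classes as high risk. The data, model and threshold are invented for illustration only.

    # Hypothetical illustration only: a minimal machine-learning system that
    # would plausibly fall within the proposed definition of an 'artificial
    # intelligence system'. All data and thresholds below are invented.
    from sklearn.linear_model import LogisticRegression

    # Toy training data: [income, existing debt] in EUR thousands.
    applicants = [[55, 5], [30, 20], [80, 2], [25, 30], [60, 10], [20, 25]]
    defaulted = [0, 1, 0, 1, 0, 1]  # 1 = defaulted on a past loan

    # Human-defined objective: estimate the likelihood of default.
    model = LogisticRegression().fit(applicants, defaulted)

    # The output (a prediction) influences the environment the system
    # interacts with: here, whether a loan application proceeds.
    default_risk = model.predict_proba([[45, 15]])[0][1]
    print(f"Predicted default risk: {default_risk:.2f}")
    print("Decision:", "refer to human reviewer" if default_risk > 0.5 else "approve")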

Application to financial services

Providers

A financial institution procuring the development of an AI system or tool with a view to placing it on the market or putting it into service under its own name or trade mark will be considered a ‘provider’ of AI and would be required to comply with the applicable requirements. This would also be the case where a financial institution develops its own AI system.

Users

As well as complying with requirements as a provider of AI, financial institutions using, rather than developing, high risk AI systems would also be required to adhere to the obligations placed on users of AI. These include ensuring systems are used in accordance with the instructions for use accompanying them, implementing the human oversight measures indicated by the provider, and ensuring that input data, where the user exercises control over it, is relevant to the intended purpose of the high risk AI system.

For credit institutions, the obligations to monitor the operation of high risk AI systems and to keep the logs they automatically generate would be treated as satisfied by compliance with the rules on internal governance arrangements, processes and mechanisms under the EU’s 2013 directive on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms. However, that directive only sets out very general requirements in relation to governance arrangements that may fall within the scope of the draft AI Act.

Financial institutions therefore need to consider the extent to which their existing processes would need to be adapted if more specific governance requirements are set out in the final AI Act.
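
By way of illustration only, the Python sketch below shows one way a user of a high risk AI system might automatically log each operation of the system, of the kind the monitoring and record-keeping obligations described above contemplate. The draft AI Act does not prescribe any particular format; the field names and system identifier here are invented.

    # Hypothetical sketch of automatic logging for a high risk AI system;
    # the record structure is invented, not prescribed by the draft AI Act.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

    def log_ai_decision(system_id: str, inputs: dict, output: float,
                        human_reviewed: bool) -> None:
        """Append a structured audit record for one run of the AI system."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "inputs": inputs,                  # input data under the user's control
            "output": output,                  # e.g. a credit score or risk estimate
            "human_reviewed": human_reviewed,  # evidence of human oversight
        }
        logging.info(json.dumps(record))

    # Example: recording a single credit-scoring decision.
    log_ai_decision(
        system_id="credit-scoring-v1",  # invented identifier
        inputs={"income_k_eur": 45, "debt_k_eur": 15},
        output=0.37,
        human_reviewed=True,
    )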

Requirements for financial services

The draft AI Act sets out requirements on various uses of AI, some of which are specific to certain types of financial services. AI systems used to evaluate creditworthiness or establish credit scores, for example, would be subject to the mandatory requirements for high risk AI. Transparency requirements, including informing individuals that they are interacting with an AI system, would also apply to the use of chatbots.

A financial institution would also be required to comply with the ‘high risk’ requirements, including the need for human oversight, where it uses AI systems for:

  • recruitment purposes, including advertising vacancies and screening applications; and
  • making decisions on promotion and termination of work-related contractual relationships, task allocation, and monitoring and evaluating performance and behaviour.

While the requirements in the proposed new regulation are limited to certain uses of AI in financial services and would not cover all AI use in the sector, the proposals encourage organisations to voluntarily develop and adopt codes of conduct that incorporate the mandatory requirements for ‘high risk’ AI and apply those requirements to all uses of AI.

Supervision

The Commission has proposed that a new European Artificial Intelligence Board be established to advise and assist it in cooperating with national supervisory authorities, with the aim of ensuring consistent national application of the finalised rules and contributing guidance on EU-wide issues relating to the planned new framework. Each EU member state would also be required to designate one or more national competent authorities to supervise the application and implementation of the regulation, as well as carry out market surveillance activities.

AI systems may also be subject to market surveillance rules, which aim to ensure that products sold within the EU are safe and comply with EU law. For AI systems placed on the market, put into service or used by financial institutions regulated by EU legislation on financial services, the market surveillance authority would be the relevant authority responsible for the financial supervision of those institutions under that legislation.

For AI systems provided or used by regulated credit institutions, the Commission has proposed that the existing authorities responsible for supervising those institutions under EU financial services legislation would also supervise their compliance with the finalised AI Act. The aim is to ensure that enforcement of the obligations under the AI Act aligns with EU financial services legislation, under which AI systems are already implicitly regulated to some extent through the internal governance requirements applying to credit institutions.

Enforcement

Administrative fines of a magnitude comparable to, and in the top tier exceeding, those provided for under the General Data Protection Regulation would be payable by financial institutions whose high risk AI systems fail to comply with the requirements of the AI Act, according to the proposals. Each EU member state would be responsible for implementing the proposed penalties framework into national law, taking into account the thresholds set out in the AI Act. As drafted, the proposed maximum fines, with a worked example after this list, are:

  • up to €30m or 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher, for infringements of the prohibited practices or non-compliance with the requirements relating to data;
  • up to €20m or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher, for non-compliance with any of the other requirements or obligations of the regulation;
  • up to €10m or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher, for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
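
Each tier therefore caps the fine at the higher of a fixed amount and a percentage of worldwide annual turnover. The short Python sketch below works through that arithmetic; the turnover figure is invented for illustration.

    # Worked example of the draft fine structure: each tier caps the fine at
    # the HIGHER of a fixed amount and a percentage of total worldwide annual
    # turnover for the preceding financial year.
    FINE_TIERS = {
        "prohibited_practices_or_data": (30_000_000, 0.06),
        "other_obligations": (20_000_000, 0.04),
        "incorrect_information": (10_000_000, 0.02),
    }

    def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
        """Return the maximum fine for a tier: the higher of the two ceilings."""
        fixed_cap, pct = FINE_TIERS[tier]
        return max(fixed_cap, pct * worldwide_annual_turnover_eur)

    # Illustration: for a bank with EUR 2bn worldwide turnover, 6% (EUR 120m)
    # exceeds the EUR 30m fixed cap, so the percentage ceiling applies.
    print(f"EUR {max_fine('prohibited_practices_or_data', 2_000_000_000):,.0f}")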

Proposals tabled by MEPs in October 2020, which would have required providers of high risk AI to provide compensation for significant immaterial harm resulting in ‘verifiable economic loss’, have not been included by the Commission in its draft AI Act.

The UK and regulation

Following the UK’s departure from the EU, the AI Act, if implemented, would not directly apply in the UK. However, where financial institutions or providers based in the UK look to launch or use AI systems in the EU, or where the outputs of a UK-based AI system are used in the EU, the regulation’s requirements would apply. UK financial institutions should therefore begin to consider how to ensure that their development and use of high risk AI systems, both in the UK and globally, align with the requirements set out by the Commission.

While the UK does not have specific AI regulation of its own, many requirements set out in the EU proposals echo principles found in existing UK legislation, regulation and guidance applicable to financial institutions, including those relating to transparency and data under the Financial Conduct Authority’s Principles for Businesses and the Data Protection Act 2018. Any future AI-specific legislation or regulatory guidance in the UK is likely to be influenced by the EU’s plans for a new AI Act.

Co-written by Priya Jhakra of Pinsent Masons.
