Risk-based regulation of AI proposed by EU policy makers

Out-Law News | 26 Apr 2021 | 2:35 pm | 4 min. read

Providers and users of artificial intelligence (AI) systems will face new regulatory obligations determined by the risk those systems pose to people, under plans outlined by the European Commission. Some AI systems will be completely banned from sale or use in the EU.

Under the proposed new Artificial Intelligence Act (AI Act), AI that poses an “unacceptable risk” to people will be prohibited, while the bulk of the regulatory requirements will apply to ‘high-risk’ AI systems, including obligations around the quality of data sets used, record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. ‘Low-risk’ AI systems would be subject to limited transparency obligations.

High-risk AI would be subject both to conformity assessment before those systems are sold or put into service, and to a system of post-market monitoring that each system's provider would need to put in place and adhere to.

Compliance with the requirements of the AI Act would also be assessed by national regulators under the Commission’s plans, with companies responsible for the most serious breaches subject to fines of up to €20 million or 4% of their annual global turnover, whichever is higher.

The Commission has proposed to define ‘high-risk’ AI within the AI Act. AI systems that are stand-alone products or used as safety components within a product could fall within the definition.

One of the factors relevant to whether an AI system is characterised as high-risk or not would be the extent of its “adverse impact” on EU fundamental rights.

“Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration,” according to the Commission.

The proposals contain examples of AI systems that would fall within the ‘high-risk’ category. They include AI constituting or forming part of machinery, medical devices and vehicles, used for managing critical infrastructure such as the management and operation of road traffic and the supply of water, gas, heating and electricity, or for the biometric identification and categorisation of people.

AI systems used to inform recruitment decisions or in relation to workplace promotions or terminations, as well as AI systems used to evaluate a person’s credit score or creditworthiness, are also considered high-risk under the plans.

Requirements around prior conformity assessments are to be linked to the level of oversight AI systems already face in passing existing regulatory hurdles. If a high-risk AI system would already be assessed by “professional pre-market certifiers in the field of product safety”, as is the case with medical devices or vehicles, then providers of those systems would be free to carry out their own conformity assessment before the AI system is sold or put into service in the EU. All other high-risk AI systems would be subject to third-party conformity assessment before being put on the market or into service.

Dr. Nils Rauer, MJI

Rechtsanwalt, Partner

The scope of the conformity assessment exception might be rather limited in practice

According to the proposals, a new conformity assessment would need to be undertaken “whenever a change occurs which may affect the compliance of the system with [the AI Act] or when the intended purpose of the system changes”. No reassessment of conformity would be necessary where the only changes made to the algorithm and its performance are those that have been pre-determined by the provider and which are assessed at the original conformity assessment.

“The beauty of AI is that it can learn autonomously,” said data expert Nils Rauer of Pinsent Masons, the law firm behind Out-Law. “The concept of pre-determination therefore runs, at least to a certain extent, counter to the idea of artificial intelligence. This means the scope of the conformity assessment exception might be rather limited in practice.”

A new European Artificial Intelligence Board would be established under the draft AI Act and tasked with producing opinions, recommendations and guidance in relation to the regulation, while the creation of national AI regulatory sandboxes is also envisaged “to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service”.

Mark Ferguson

Public Policy Manager

The draft AI Act also has to be viewed in the wider context of the Commission’s digital transformation agenda

The AI systems that will be banned under the AI Act include those that deploy “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”. Other AI systems which would be prohibited include those designed to exploit and manipulate the behaviour of children or disabled people to cause harm – including “toys using voice assistance encouraging dangerous behaviour of minors”, according to the Commission.

The use of ‘real-time’ remote biometric identification systems in public by law enforcement agencies will also be subject to a general ban, though exceptions apply to enable its use for specific purposes, like combatting a risk of terrorism, and under strict conditions – including judicial oversight. There would also be a ban on certain uses of AI systems by governments and other public bodies for the purposes of “social scoring”.

Public policy expert Mark Ferguson of Pinsent Masons said that there is a long process of legislative scrutiny for the draft AI Act to pass through before it becomes law.

Ferguson said: “Firstly, the European Parliament has to decide which committee will take the lead role in scrutinising the legislation. A number of committees have recently produced reports on AI and they will all likely be keen to undertake this role. However, even if MEPs reach consensus on the text, the legislation will not come into force without the support of the Council of Ministers, the EU’s other law-making body, made up of representatives of the member state governments. In the past, reconciling the views and objectives of 27 member states has often taken years in the case of major new legislation such as this.”

“The draft AI Act also has to be viewed in the wider context of the Commission’s digital transformation agenda. It follows on from the Commission’s wide-ranging new digital strategy and white paper on AI published in early 2020 that explored options for a legislative framework for trustworthy AI, and considered what further action may be needed to address issues of safety, liability, fundamental rights and data. The digital strategy, however, also envisages other new EU legislation, including the Digital Services Act, the Digital Markets Act, and the Data Governance Act and so is part of a much wider portfolio of reform in the digital single market,” he said.