Out-Law / Your Daily Need-To-Know

Major changes should be made to EU laws on liability, consumer protection and intellectual property (IP) to account for the use of artificial intelligence (AI) in the digital age, according to MEPs.

The European Parliament recently endorsed recommendations made by a special committee it established on AI in a digital age (the AIDA Committee). The recommendations were set out by the committee in a major report in which it warned that the EU is falling behind other countries, like the US and China, in the development of technology standards. It called for the EU to become a global standard setter for AI.

“A clear regulatory framework, political commitment and a more forward-leaning mindset, which are often lacking at present, are needed for European actors to be successful in the digital age and to become technology leaders in AI,” the AIDA Committee’s report said.

“Establishing the world’s first regulatory framework for AI could give the EU leverage and a first-mover advantage in setting international AI standards based on fundamental rights as well as successfully exporting human-centric, ‘trustworthy AI’ around the world,” it said.

The Parliament’s vote in favour of the committee’s recommendations has no practical legal effect, but it signals the willingness of MEPs – in their capacity as EU law makers – to support reforms in several areas in response to the growing use of AI by organisations. It called on EU policymakers to “formulate and adopt a long-term AI industry strategy with a clear vision for the next 10 years”. That strategy could operate as an extension of the existing ‘digital compass’ the European Commission has published, it said.

Some AI-specific legislation is already in the pipeline. The Parliament is in the process of scrutinising the proposed new EU AI Act, which envisages a risk-based approach to the regulation of AI systems on a cross-sector basis. Some of the AIDA Committee’s recommendations are relevant to that initiative.

For example, the Committee called for the classification of AI systems as ‘high-risk’ to be “based on their concrete use and the context, nature, probability, severity and potential irreversibility of the harm that can be expected to occur in breach of fundamental rights and health and safety rules as laid down in Union law”. It also said ‘high-risk’ AI systems should be designed with ‘stop buttons’ to enable humans to “safely and efficiently halt automated activities at any moment”.

Public policy expert Mark Ferguson of Pinsent Masons said: “The AIDA Committee’s report highlights the support that the principles of this legislation enjoy across the European Parliament. However, the challenge laid down by the AIDA Committee is for policymakers to act fast to set out EU standards or risk being left behind by international competitors, the US and China.”

Sarah Cameron, who specialises in technology law at Pinsent Masons, said the views of Axel Voss – the MEP who is the Parliament’s rapporteur for the report on AI in the digital age – that the EU should not make laws that are too complex and restrictive, chime with the direction of travel likely to be taken by UK policymakers. The Office for AI in the UK is currently developing a “pro-innovation national position” on governing and regulating AI.

Cameron said: “Voss sees the EU falling behind on AI adoption but the potential for leading on the development of international standards – this is similar to how the UK sees itself as having a leading role in the AI assurance space. The report suggests the two work together on international standards which must make sense.”

“The AIDA Committee report emphasises the importance of context in terms of areas of application and how looking at the benefits and risks in context rather than the specific AI application itself is critical. This approach is very much in keeping with the approach likely to be taken in the UK too,” she said.

The Parliament will need to agree on the final wording of the EU AI Act with the Council of Ministers, the EU’s other law-making body, for the proposed legislation to come into force.

Ferguson said: “The AI Act will continue to undergo scrutiny in the European Parliament’s Internal Market and Consumer Protection (IMCO) and the Civil Liberties, Justice and Home Affairs (LIBE) committees before facing a further parliamentary vote in September. The Committees published a draft report in April.”

“The French presidency of the Council of Ministers committed to making AI a priority in its work programme, and will look to build on the progress of the Slovenian presidency which published a compromise text in November 2021,” he said.

With their recent vote, MEPs also endorsed the AIDA Committee’s calls for changes to be made to EU laws on liability to account for the use of AI. They do not believe “a complete revision” of the existing laws, such as those concerning product liability, is necessary, but they do think “specific and coordinated adjustments to European and national liability regimes are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation”.

“While high-risk AI systems should fall under strict liability laws, combined with mandatory insurance cover, any other activities, devices or processes driven by AI systems that cause harm or damage should remain subject to fault-based liability,” the AIDA Committee said. “The affected person should nevertheless benefit from a presumption of fault on the part of the operator, unless the latter is able to prove that it has abided by its duty of care.”

The European Commission is expected to put forward proposals for new legislation on AI liability in due course.

The MEPs’ vote also signalled support for changes to consumer protection laws. The law should, for example, entitle consumers to “know whether they are interacting with an AI agent”, as well as “insist upon human review of AI decisions” and give them “means to counter commercial surveillance or personalised pricing”, according to the recommendations they endorsed.

To ensure AI systems are safe and trustworthy, MEPs said a system of pre-market risk self-assessments could be mandated, in tandem with data protection impact assessments. That framework could be “complemented by third-party conformity assessments with relevant and appropriate CE marking” and combined with “ex post enforcement by market surveillance”, the AIDA Committee proposed.

MEPs also suggested they would be willing to back changes to IP laws – an issue that has prompted debate globally. They agree those laws should “incentivise and protect AI innovators by granting them patents as a reward for developing and publishing their creations”.

The wide-ranging report by the AIDA Committee also explored how AI use has particular utility in specific contexts – including in supporting sustainability initiatives and the digitisation of health care.

For example, its report called for AI to be used “to monitor energy consumption in municipalities and develop energy efficiency measures”, and it highlighted how AI can not only reduce administrative burdens for health care professionals but also enhance medical research, improve the drug development process and facilitate the delivery of more personalised treatments.

In relation to AI in health specifically, MEPs backed calls for more guidance on the processing of health data under EU data protection laws. The report said that would help “harness the full potential of AI for the benefit of individuals, while respecting fundamental rights”.

The MEPs also support the development of “a clinical trial-like method to test the adequacy and monitor the deployment of AI in clinical settings” and a new legal framework for online medical consultations.

The MEPs further want the European Commission to explore what steps can be taken to “guard the human brain against interference, manipulation and control by AI-powered neurotechnology”.