Robots might have 'electronic persons' status under future EU laws

Out-Law News | 20 Feb 2017 | 10:23 am | 3 min. read

Advanced robots of the future could be given their own legal status under plans MEPs have asked EU policy makers to consider.

The European Parliament has passed a resolution which calls on the European Commission to put forward proposals for new EU legislation that addresses "legal questions related to the development and use of robotics and AI foreseeable in the next 10 to 15 years", together with accompanying guidance and codes of conduct to cover issues such as the ethical design and operation of such machines.

In its resolution, MEPs said the Commission, as part of its proposals, should consider whether "a specific legal status for robots in the long run" should be provided for "so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently".

The recommendation was included in the resolution despite the concept of conferring legal personality on robots being dismissed in a previous study conducted on behalf of the parliament's Legal Affairs Committee. The study said the idea is "as unhelpful as it is inappropriate".

"Advocates of the legal personality option have a fanciful vision of the robot, inspired by science-fiction novels and cinema," it said. "They view the robot – particularly if it is classified as smart and is humanoid – as a genuine thinking artificial creation, humanity’s alter ego. We believe it would be inappropriate and out-of-place not only to recognise the existence of an electronic person but to even create any such legal personality. Doing so risks not only assigning rights and obligations to what is just a tool, but also tearing down the boundaries between man and machine, blurring the lines between the living and the inert, the human and the inhuman."

"Moreover, creating a new type of person – an electronic person – sends a strong signal which could not only reignite the fear of artificial beings but also call into question Europe’s humanist foundations. Assigning person status to a non-living, non-conscious entity would therefore be an error since, in the end, humankind would likely be demoted to the rank of a machine. Robots should serve humanity and should have no other role, except in the realms of science-fiction," it said.

The European Parliament resolution builds on an earlier report that MEP Mady Delvaux produced for the Legal Affairs Committee last year.

Among other things, the resolution contains recommendations on how robot liability could be addressed prior to any new 'electronic persons' status being established. It said a strict liability regime could be established, or, alternatively, that a risk management approach could be provided for under new laws.

"Strict liability requires only proof that damage has occurred and the establishment of a causal link between the harmful functioning of the robot and the damage suffered by the injured party," it said. "The risk management approach does not focus on the person 'who acted negligently; as individually liable but on the person who is able, under certain circumstances, to minimise risks and deal with negative impacts."

Whichever approach is selected, the new laws "should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party, on the sole grounds that damage is caused by a non-human agent", the resolution said.

It also outlined the potential for a mandatory insurance scheme to be applied to force producers or owners of robots to "take out insurance cover for the damage potentially caused by their robots", and said that an underlying compensation fund could be set up to provide for damages payouts in cases where insurance cover is absent.

The extent of a robot developer or operator's liability could also depend on "the actual level of instructions given to the robot and of its degree of autonomy", it said. However, it made a distinction between a robot's programming and its capacity for self-learning.

"The greater a robot's learning capability or autonomy, and the longer a robot's training, the greater the responsibility of its trainer should be," the resolution said. "In particular, that skills resulting from 'training' given to a robot should be not confused with skills depending strictly on its self-learning abilities when seeking to identify the person to whom the robot's harmful behaviour is actually attributable."