New rules for robots backed by European Parliament committee

A committee of MEPs has called for new EU laws to be created to address liability for damage caused by robots.

The European Parliament's Legal Affairs Committee voted in favour of a resolution calling for new laws addressing robotics and artificial intelligence (AI) to be set out alongside a new voluntary ethical code of conduct that would apply to developers and designers.

The resolution backed by the MEPs is not yet publicly available, but, according to details in the European Parliament's statement, its contents are broadly similar to those of a draft report (22-page / 331KB PDF) prepared for the Committee by MEP Mady Delvaux last year.

That draft report called for "a legislative instrument on legal questions related to the development of robotics and artificial intelligence foreseeable in the next 10-15 years" to be set out in the EU, and said that, when working on proposals for civil law rules on robotics, the European Commission should consider whether a "specific legal status for robots" should be created.

"At least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause," the draft report said. It said "electronic personality" could also be applied "to cases where robots make smart autonomous decisions or otherwise interact with third parties independently".

However, a study carried out on behalf of the Committee and published last week (34-page / 845KB PDF) said "the idea of autonomous robots having a legal personality" should be disregarded, describing it as "as unhelpful as it is inappropriate".

"Advocates of the legal personality option have a fanciful vision of the robot, inspired by science-fiction novels and cinema. They view the robot – particularly if it is classified as smart and is humanoid – as a genuine thinking artificial creation, humanity’s alter ego. We believe it would be inappropriate and out-of-place not only to recognise the existence of an electronic person but to even create any such legal personality. Doing so risks not only assigning rights and obligations to what is just a tool, but also tearing down the boundaries between man and machine, blurring the lines between the living and the inert, the human and the inhuman."

"Moreover, creating a new type of person – an electronic person – sends a strong signal which could not only reignite the fear of artificial beings but also call into question Europe’s humanist foundations. Assigning person status to a non-living, non-conscious entity would therefore be an error since, in the end, humankind would likely be demoted to the rank of a machine. Robots should serve humanity and should have no other role, except in the realms of science-fiction," it said.

The study said that there are better ways of addressing liability for damage caused by robots than assigning the machines legal personality.

"Other systems would be far more effective at compensating victims; for example, an insurance scheme for autonomous robots, perhaps combined with a compensation fund," it said.

Delvaux's draft report also said that a "compulsory insurance scheme" should be considered, to force producers or owners of robots to "take out insurance cover for the damage potentially caused by their robots". An underlying compensation fund would help guarantee compensation could be paid to victims of damage caused by robots that are not insured, it said.

The UK government recently set out its plans to introduce a "single insurer model" to address how pay-outs to innocent victims of collisions involving driverless cars should be handled, and how the underlying liability and recovery of costs for those incidents would be governed.

Delvaux's draft report also said new EU laws on robots should provide for "a system of registration of advanced robots", and that businesses should be required to disclose how many 'smart robots' they use, the savings they make in "social security contributions" by using those machines in place of human personnel, and the revenue they generate from the use of robotics and artificial intelligence.

New laws should also be supported by a voluntary code of conduct, to ensure robots are developed and designed ethically and that "the dignity, privacy and safety of humans" are respected, the Committee said. Robots should also have "kill switches", it said.

The proposals adopted by the Legal Affairs Committee are now scheduled to be voted on by all MEPs in February.

The Committee's vote coincided with the release of the 2017 edition of the World Economic Forum's Global Risks Report, which cited AI and robotics as "the emerging technology with the greatest potential for negative consequences over the coming decade".

The report charted the results of a survey of 745 leaders in business, government, academia and non-governmental and international organisations, as well as members of the Institute of Risk Management.

"AI will become ever more integrated into daily life as businesses employ it in applications to provide interactive digital interfaces and services, increase efficiencies and lower costs," the report said. "Superintelligent systems remain, for now, only a theoretical threat, but artificial intelligence is here to stay and it makes sense to see whether it can help us to create a better future."

"To ensure that AI stays within the boundaries that we set for it, we must continue to grapple with building trust in systems that will transform our social, political and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us," it said.
