
Legal liability rules need to be expanded to account for artificial intelligence, says expert


The growth of artificial intelligence (AI) will require changes to be made to rules on liability, an expert has said.

Hong Kong-based technology law specialist Francois Tung of Pinsent Masons, the law firm behind Out-Law.com, said AI has the potential to expand the global economy. However, he said it is currently unclear who would ultimately be held responsible for wrong decisions taken by a computer if that machine can think on its own.

Tung said the law should recognise manufacturers or users of self-learning machines as ultimately responsible for what those machines do.

"We need to distinguish moral responsibility and legal responsibility," Tung said. "Legal rules on liability have to be expanded to take into account AI. While it may be a matter of philosophical debate whether AI has 'free will', and hence the ability to intentionally cause harm to others and to be held responsible for such action, I believe that humans can never relinquish oversight of computers. Therefore the manufacturer or user of machines should bear ultimate responsibility, after all people create those machines and programme them to work."

Tung was commenting after an AI program developed by Google defeated one of the world's leading players of the game Go for a second time.

Tung said that it is no longer impossible to consider that computers could replace "vital human roles" given their increasing "ability to analyse abstract patterns and ideas and think intuitively".

"Hypothetically, a computer might be able to help with things like disease diagnosis by tapping into a database of knowledge and previous cases and help inform treatment plans for patients," Tung said.

AI could help economies to grow by supporting human creativity and performing more basic tasks faster, better and cheaper, Tung said. Because machines will never be able to fully replicate human creativity, designers, architects and writers, as well as composers and artists, are among the working population least likely to lose their jobs to AI, he said.

Mass job losses as a result of the rise of AI could create social issues in relation to people's physical and psychological well-being that policy makers would have to respond to, Tung said. Policy makers would need to work out how to "apply the benefits of AI" to the economy to support an environment where there are fewer jobs for humans and reduced income tax revenues, he said.

"Many years ago transactions at a stock market had to be done by hand," Tung said, "Now we see investment banks engage automated machines, programmed by humans, to conduct high frequency trading to execute transactions within a fraction of a second.  Technologies of this kind will be increasingly common in our economy."

Paul Haswell of Pinsent Masons said: "There are concerns that, by 2050, 50% of the jobs done today will be done by robots, and advancements in AI suggest that this might even be a conservative estimate. This has enormous implications, not just in relation to law and liability, but for society as a whole."
