
Out-Law News

UK government urged to establish new Commission on Artificial Intelligence

A new Commission on Artificial Intelligence (AI) should be set up in the UK to "examine the social, ethical and legal implications of recent and potential developments in AI", a committee of MPs has recommended.

The UK government should base the new Commission at the Alan Turing Institute, the Science and Technology Committee said in a new report.

The Commission should comprise members from a wide range of backgrounds, including experts in the fields of law, social science and philosophy, computer scientists, natural scientists, mathematicians and engineers, as well as representatives from industry, non-governmental organisations and the public, it said.

The Committee said: "[The Commission] should focus on establishing principles to govern the development and application of AI techniques, as well as advising the government of any regulation required on limits to its progression. It will need to be closely coordinated with the work of the Council of Data Ethics which the government is currently setting up."

In its report the Committee criticised the UK government for not setting out a "strategy for developing the skills, and securing the critical investment, that is needed to create future growth in robotics and AI", and for failing to fulfil its promise to set up a Robotics and Autonomous Systems (RAS) Leadership Council.

Potential "productivity gains" that could be derived from the greater use of AI and robotics could go "unrealised" if the UK government does not set out a "strategy for the sector", it warned.

"The government should, without further delay, establish a RAS Leadership Council, with membership drawn from across academia, industry and, crucially, the government," the Committee said. "The Leadership Council should work with the government and the Research Councils to produce a government-backed ‘National RAS Strategy’; one that clearly sets out the government’s ambitions, and financial support, for this ‘great technology’. Founding a ‘National RAS Institute’, or Catapult, should be part of the strategy."

The government was also urged to do more to develop people's skills to account for advances in robotics and AI.

The Committee said: "While we cannot yet foresee exactly how this fourth industrial revolution will play out, we know that gains in productivity and efficiency, new services and jobs, and improved support in existing roles are all on the horizon, alongside the potential loss of well-established occupations. Such transitions will be challenging. As a nation, we must respond with a readiness to re-skill, and up-skill, on a continuing basis."

"This requires a commitment by the government to ensure that our education and training systems are flexible, so that they can adapt as the demands on the workforce change, and are geared up for lifelong learning. Leadership in this area, however, has been lacking," it said.

The Committee urged the UK government to publish a new digital strategy with a commitment in it to address digital skills "without delay".

The Committee also said that legal and ethical issues arising from advances in AI and robotics must be considered as a matter of priority.

It said: "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now. Not only would this help to ensure that the UK remains focused on developing ‘socially beneficial’ AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time."

Earlier this year, Hong Kong-based technology law specialist Francois Tung of Pinsent Masons, the law firm behind Out-Law.com, said the growth of AI will require changes to be made to rules on liability.

Tung said the law should recognise manufacturers or users of self-learning machines as ultimately responsible for what those machines do.

"We need to distinguish moral responsibility and legal responsibility," Tung said. "Legal rules on liability have to be expanded to take into account AI. While it may be a matter of philosophical debate whether AI has 'free will', and hence the ability to intentionally cause harm to others and to be held responsible for such action, I believe that humans can never relinquish oversight of computers. Therefore the manufacturer or user of machines should bear ultimate responsibility, after all people create those machines and programme them to work."
