Out-Law News

European AI ethics guidelines opened to consultation


New guidelines on the ethical use of artificial intelligence (AI) can help promote innovative new uses of the technology, experts have said.

Earlier this week, the European Commission opened a consultation on draft ethics guidelines for trustworthy AI. The draft guidelines have been developed by the high-level expert group on artificial intelligence (AI HLEG), appointed by the Commission and made up of 52 academics and representatives from businesses and civil society groups.

According to the draft guidelines, AI must have an "ethical purpose" and be "human-centric". AI should comply with "fundamental rights, principles and values", including respect for human dignity, people's right to make decisions for themselves, and equal treatment, it said.

AI should conform to five principles: beneficence, non-maleficence, autonomy, justice and explicability, the draft guidelines said.

This means AI should be "designed and developed to improve individual and collective wellbeing", "protect the dignity, integrity, liberty, privacy, safety, and security of human beings in society and at work", and not "threaten the democratic process, freedom of expression, freedoms of identity, or the possibility to refuse AI services", and that humans should be free from "subordination to, or coercion by, AI systems".

It also means "the development, use, and regulation of AI systems must be fair" and that "the positives and negatives resulting from AI should be evenly distributed". The draft guidelines also promoted transparency in the use of AI, where AI systems are "auditable, comprehensible and intelligible by human beings at varying levels of comprehension and expertise".

"Explicability is a precondition for achieving informed consent from individuals interacting with AI systems and in order to ensure that the principle of explicability and non-maleficence are achieved the requirement of informed consent should be sought," according to the draft guidelines.

The AI HLEG raised concerns about AI, including about its potential to identify people without their consent, be used covertly, facilitate "mass citizen scoring", and be used as a weapon "without meaningful human control".

The AI HLEG, however, published a non-exhaustive list of requirements for trustworthy AI, covering accountability, governance, respect for privacy and human autonomy, safety and transparency, among other points.

The draft guidelines also set out technical and non-technical methods for achieving trustworthy AI. Examples include building procedures or constraints into AI systems' architecture, conducting testing and validation exercises, documenting decisions made by AI, adapting key performance indicators, and implementing "agreed standards for design, manufacturing and business practices".
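By way of illustration only, documenting decisions made by AI could be as simple as writing an audit record for every automated decision. The sketch below is an assumption about what such logging might look like in practice; the function and field names are hypothetical and are not taken from the draft guidelines.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail: each automated decision is recorded with its
# inputs, output, model version and a human-readable explanation, so that
# the decision can later be reviewed by a human auditor.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def log_decision(model_version, inputs, output, explanation):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
    }
    logging.info(json.dumps(record))

# Example: documenting a made-up credit decision.
log_decision(
    model_version="risk-model-1.3",
    inputs={"income": 42000, "existing_debt": 5000},
    output="approved",
    explanation="Debt-to-income ratio below configured threshold",
)
```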

The AI HLEG has also published a list of questions AI developers can ask themselves to assess whether they are meeting the requirements for trustworthy AI. Developers should ask questions such as who is accountable if things go wrong; is a 'stop button' foreseen; is the system GDPR compliant; and what would be the impact of the AI system failing, the group said.
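As a rough sketch of how one of those questions might translate into software, a 'stop button' could take the form of a kill switch that every automated action checks before running. The flag name and structure below are illustrative assumptions, not part of the AI HLEG's assessment list.

```python
import threading

# Illustrative kill switch: a shared flag that a human operator can set
# to halt all further automated actions by the system.
stop_flag = threading.Event()

def take_action(action):
    # Refuse to act once the operator has pressed the stop button.
    if stop_flag.is_set():
        raise RuntimeError("AI system halted by human operator")
    print(f"Executing: {action}")

take_action("send recommendation")  # runs normally
stop_flag.set()                     # operator presses the stop button
try:
    take_action("send recommendation")
except RuntimeError as err:
    print(err)                      # further actions are refused
```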

Madrid-based technology law expert Miguel Garrido de Vega of Pinsent Masons, the law firm behind Out-Law.com, said that until now there has been no AI-specific framework against which researchers, businesses and regulators can measure the development and use of the technology.

"While a raft of legislation governs use of AI, including in areas such as consumer protection, data protection and product liability, those rules do not address some of the 'big picture' ethical issues that AI raises, such as around its potential to take and shape decisions impacting humans and what safeguards are needed to retain human autonomy and accountability," Garrido de Vega said.

"It is wrong for AI to be developed solely on the basis of restrictive and preventative regulation, as this harms growth and innovation. Equally, though, setting no standards to govern AI's development and use risks tools being made that are harmful, invasive, unethical and even dangerous. This is why new ethics guidelines are to be welcomed," he said.

The draft guidelines are open to comment until 18 January. In March 2019, final guidelines are expected to be handed to the Commission, which will analyse them and propose how to take this work forward.
