Out-Law News | 10 Apr 2019 | 9:23 am | 2 min. read
The new guidelines for trustworthy AI have been developed by a high-level expert group on artificial intelligence (AI HLEG). The group, set up by the European Commission, consists of 52 academics and representatives from businesses and civil society groups. The guidelines were published earlier this week.
The Commission said a pilot phase to test the new guidelines will begin in the summer. Companies, public administrations and organisations can participate by signing up to the European AI Alliance. Priya Jhakra, a lawyer at Pinsent Masons, the law firm behind Out-Law.com, said businesses would welcome the inclusion of more practical recommendations on the actions they should consider when designing and implementing AI systems.
According to the new guidelines, trustworthy AI should be lawful, ethical and robust throughout the system's entire lifecycle. The expert group has put forward seven central principles for AI systems to adhere to: human agency and oversight, robustness and safety, privacy and data governance, transparency, non-discrimination, societal and environmental well-being, and accountability.
Central recommendations include that organisations should make clear to users, each time they interact with an AI system, that they are dealing with one, and should provide details, in a clear and proactive manner, about "the AI system’s capabilities and limitations, enabling realistic expectation setting, and about the manner in which the requirements are implemented".
In addition, those deploying AI should be able to explain the technical processes of an AI system and related human decisions, and those systems should be "human-centric" and free from bias and discrimination.
"The guidelines show the EU’s efforts to become a leader in setting standards for ethical use of AI, and while they are not legally binding they could help shape future legislation," Jhakra said. "Although the guidelines focus on ethics and robustness and not lawful AI, they are useful in pointing to existing EU laws and regulations that apply to AI, not least the General Data Protection Regulation, product liability laws and consumer protection legislation."
The finalised guidance also includes new features that were not included in the draft version published last year, such as the trustworthy AI assessment list and governance structure.
"The trustworthy AI assessment list, primarily addressed to developers and deployers of AI systems, is a useful checklist that organisations can use to ensure that AI systems are being used ethically and are robust," Jhakra said. "The list gives organisations a lot to think about and consider when developing, implementing and using AI systems, and may help them identify any gaps or areas for development."
The guidelines also offer a governance structure at operational and management levels, setting out what various people within an organisation can be expected to do to implement the assessment list and manage risks that stem from the use of AI.
"Taken together, the trustworthy AI assessment list and organisational process help in making the guidelines a more practical tool for companies and organisations," Jhakra said. "The draft guidelines were highly focused on mapping the fundamental rights and principles of EU law which AI engages, and whilst this element is retained in the finalised guidance, the new features help bring those rights and principles to life by providing greater clarity over how they apply in practice."