Out-Law Analysis | 22 Feb 2019 | 9:04 am | 4 min. read
Those developing the ethics guidelines should look to Singapore for inspiration on how they might best support the responsible and legally compliant adoption of AI.
AI offers the potential for more targeted data-driven interventions and solutions in sectors such as health care and financial services, but it also presents new risks of potential consumer detriment if it is implemented without appropriate testing and safeguards. This is being increasingly recognised by policy makers globally.
It is against this backdrop that policy makers in the EU, Singapore and Dubai have recently moved to draw up guidelines to help shape the way businesses deploy AI.
A high-level expert group on artificial intelligence (AI HLEG), appointed by the European Commission and made up of 52 academics and representatives from businesses and civil society groups, published draft guidelines late last year.
Singapore's government chose the World Economic Forum in Davos in January 2019 to release a new governance framework for implementing and using AI ethically and responsibly, developed by the Personal Data Protection Commission and the Infocomm Media Development Authority.
Smart Dubai, the body driving digital transformation in the city of Dubai, also released its own guidelines on ethical use of AI in January, following consultation with a range of local regulators, including those responsible for telecoms, health and the various utilities.
Each set of guidelines takes a different approach, but they share common themes in relation to the use of AI and AI decision making:
Both the EU and Singapore guidelines state that AI should be "human-centric" - the interests of individuals, including their wellbeing and safety, should be protected when developing and using AI.
The draft EU guidelines focus on setting out fundamental rights, principles and values that are relevant to the use of AI. These have their origins in the rights and principles enshrined in EU law and include:
Broad requirements that AI systems should meet are set out in the guidelines. These include high-level standards on accountability, governance, design for all, governance of AI autonomy including human oversight, non-discrimination, respect for human autonomy, respect for privacy, robustness, safety and transparency. The implementation of technical and non-technical methods is recommended to meet these requirements.
However, the guidelines do not offer much detail on the technical and non-technical measures that should be adopted. Instead, the recommendations on the actions businesses should consider are broad. These include:
The guidelines offer little practical guidance for businesses and users of AI in areas such as the audit of algorithms, staff training, customer relationship management, and the use of data sets.
AI HLEG has defined AI as "systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions."
"As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimisation), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)," it said.
As with the EU guidelines, the Singapore guidelines recognise the importance of fundamental rights, but they also set out a series of steps businesses can take to develop internal governance measures and processes that provide for responsible and ethical use of AI.
The guidelines detail and explain how businesses using AI can:
The guidelines also delve into four issues specific to the operation of AI:
The Dubai guidelines take a principles-based approach to the ethical use of AI, promoting high-level concepts such as transparency. Like the EU guidance, however, they lack clear measures for those implementing AI to adopt.
The Smart Dubai self-assessment toolkit does, however, provide practical examples and case studies which help put the principles into context and may give organisations exploring AI a better understanding of the types of measures they could implement to ensure their use of the technology is ethical.
While the draft EU guidelines offer a useful insight into the principles that should shape the use of AI, businesses would benefit from more practical guidance.
Practical guidance for businesses should fill the legal, regulatory and operational gaps relating to ethics, such as obtaining consent to the processing of personal data, individuals' right to be informed, and compliance with the General Data Protection Regulation (GDPR) more generally.
It is incumbent on the AI HLEG, and on the European Commission in considering its work, to look at the framework developed in Singapore and at how it might be adapted to assist EU businesses exploring the potential of AI. The finalised EU guidelines are due to be released in April.
Priya Jhakra is an expert in AI in the financial services sector at Pinsent Masons, the law firm behind Out-Law.com.