Those developing the ethics guidelines should look to Singapore for inspiration on how they might best support the responsible and legally compliant adoption of AI.
Shaping the use of AI
AI offers the potential for more targeted data-driven interventions and solutions in sectors such as health care and financial services, but it also presents new risks of potential consumer detriment if it is implemented without appropriate testing and safeguards. This is being increasingly recognised by policy makers globally.
It is against this backdrop that policy makers in the EU, Singapore and Dubai have recently moved to draw up guidelines to help shape the way businesses deploy AI.
A high-level expert group on artificial intelligence (AI HLEG), appointed by the European Commission and made up of 52 academics and representatives from businesses and civil society groups, published draft guidelines late last year.
Singapore's government chose the World Economic Forum in Davos in January 2019 to release a new governance framework for implementing and using AI ethically and responsibly, developed by the Personal Data Protection Commission and the Infocomm Media Development Authority.
Smart Dubai, the body driving digital transformation in the city of Dubai, also released its own guidelines on ethical use of AI in January, following consultation with a range of local regulators, including those responsible for telecoms, health and the various utilities.
Each set of guidelines takes a different approach, but they share common themes in relation to the use of AI and AI decision making:
Both the EU and Singapore guidelines state that AI should be "human-centric" - the interests of individuals, including their wellbeing and safety, should be protected when developing and using AI.
The draft EU guidelines
The draft EU guidelines focus on setting out fundamental rights, principles and values that are relevant to the use of AI. These have their origins in the rights and principles enshrined in EU law and include:
- respect for human dignity;
- freedom of the individual;
- respect for democracy, justice and the rule of law;
- equality, non-discrimination and solidarity;
- citizens’ rights.
The guidelines set out broad requirements that AI systems should meet. These include high-level standards on accountability, governance, design for all, governance of AI autonomy (including human oversight), non-discrimination, respect for human autonomy, respect for privacy, robustness, safety and transparency. They recommend implementing both technical and non-technical methods to meet these requirements.
However, the guidelines do not offer much detail on the technical and non-technical measures that should be adopted. Instead, the recommendations on the actions businesses should consider are broad. These include:
- making trustworthy AI part of an organisation’s culture, such as by implementing principles for trustworthy AI into a code of conduct;
- providing information to stakeholders;
- ensuring traceability;
- facilitating auditability of AI systems;
- ensuring there is a specific process for accountability governance;
- training and education.
The guidelines offer little practical guidance for businesses and users of AI in areas such as the audit of algorithms, staff training, customer relationship management, and the use of data sets.
Unlike the guidelines developed in Singapore and Dubai, however, the AI HLEG has attempted to define AI. The definition builds on an earlier one set out by the European Commission last year.
AI HLEG has defined AI as "systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions."
"As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimisation), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)," it said.
The Singapore guidance
Like the EU guidelines, the Singapore guidelines recognise the importance of fundamental rights, but they also set out a series of steps businesses can take to develop internal governance measures and processes that provide for responsible and ethical use of AI.
The guidelines detail and explain how businesses using AI can:
- improve on their internal governance structures and measures;
- put in place risk management and internal controls;
- determine an AI decision-making model, for example by first deciding on their commercial objectives for using AI, such as ensuring consistency in decision making, improving operational efficiency and reducing costs, or introducing new product features to increase consumer choice.
The guidelines also delve into four issues specific to the operation of AI:
- data used for AI models – the guidelines look at ensuring data quality, the minimisation of inherent bias in data sets, the use of different data sets for training, testing and validation, and the review and update of data sets;
- algorithms – the transparency of algorithms used in AI models and their 'explainability', which the guidelines describe as whether it can be explained how an algorithm functions and how it arrived at a particular outcome;
- customer relationship management – the guidelines discuss how to increase transparency in AI decision making, policies for explaining AI outcomes to consumers, management of the human-AI interface, general disclosure, a consumer's option to opt out of interacting with AI, and channels for feedback and decision review;
- audit – the guidelines outline factors to be considered when auditing algorithms.
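The guidance on using different data sets for training, testing and validation reflects standard machine learning practice: models are fitted on one partition, tuned on another, and evaluated on data they have never seen. A minimal sketch of such a split, written in plain Python (the 70/15/15 ratios and function name are assumptions for illustration; the guidelines do not prescribe any particular split or tooling):

```python
# Illustrative three-way data split of the kind the Singapore guidelines
# discuss. Ratios and names are assumptions, not taken from the guidelines.
import random

def train_val_test_split(records, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle records and partition them into three disjoint sets."""
    shuffled = records[:]                  # copy, so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

records = list(range(100))              # stand-in for a real data set
train, val, test = train_val_test_split(records)
print(len(train), len(val), len(test))  # prints: 70 15 15
```

Using a fixed random seed, as above, also supports the guidelines' audit theme: the same split can be reproduced later when reviewing how a model was trained.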
The Dubai guidelines
The Dubai guidelines take a principles-based approach to the ethical use of AI, promoting high-level concepts such as transparency. Like the EU guidance, however, they lack clear measures for those implementing AI to adopt.
The Smart Dubai self-assessment toolkit does, however, provide practical examples and case studies which help put the principles into context and may give organisations exploring AI a better understanding of the types of measures they could implement to ensure their use of the technology is ethical.
Follow Singapore's lead
While the draft EU guidelines offer a useful insight into the principles that should shape the use of AI, businesses would benefit from more practical guidance.
Practical guidance for businesses should look to fill in the legal, regulatory, and operational gaps which relate to ethics, such as obtaining consent to the processing of personal data, individuals' right to be informed, and compliance with the General Data Protection Regulation (GDPR) more generally.
It is incumbent on the AI HLEG, and on the European Commission in considering its work, to look at the framework developed in Singapore and how it might be adapted to assist EU businesses exploring the potential of AI. The finalised EU guidelines are due to be released in April.
Priya Jhakra is an expert in AI in the financial services sector at Pinsent Masons, the law firm behind Out-Law.com.