Singapore offers guidance on AI use

Out-Law News | 23 Jan 2019 | 4:56 pm | 2 min. read

A new guide to help organisations implement and use artificial intelligence (AI) ethically and responsibly has been released for consultation by authorities in Singapore.

The 'model AI governance framework' provides "a baseline set of considerations and measures for organisations operating in any sector to adopt", according to the consultation paper prepared by the Personal Data Protection Commission (PDPC) and Infocomm Media Development Authority (IMDA).

"It is generally acknowledged that the area of artificial intelligence is a fledgling one progressing at a breakneck speed," Bryan Tan, a technology law expert at Pinsent Masons MPillay, the Singapore joint law venture between MPillay and Pinsent Masons, the law firm behind Out-Law.com, said.

"Therefore, while the model framework is not exhaustive nor evergreen, it is nonetheless a good starting point for organisations developing AI decision-making tools," he said.

The framework was announced at the World Economic Forum in Davos, Switzerland, in the hope of it attracting global feedback, Singapore's minister for communications and information, S. Iswaran, said.

The new framework promotes "clear roles and responsibilities for the ethical deployment of AI", and contains further internal governance measures that organisations can adopt, including in respect of risk management and internal controls, to ensure "robust oversight" of their use of AI.

It also sets out a series of steps businesses can take to set a decision-making model for AI that best suits their objectives and corporate values, but which also factors in societal norms and values and risks to individuals. Businesses following the framework are advised to determine "the level of human oversight" in their "decision-making process involving AI" after classifying "the probability and severity of harm to an individual as a result of the decision made by an organisation about that individual".
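The framework leaves the classification exercise to each organisation, but the logic it describes can be illustrated with a short sketch. The mapping below is purely hypothetical: the function name, the high/low categories and the three oversight models shown are illustrative assumptions, not terms prescribed by the consultation paper.

```python
# Hypothetical sketch of the framework's idea: classify the probability and
# severity of harm from an AI-assisted decision, then pick a level of human
# oversight. The specific mapping here is an illustrative assumption only.

def oversight_level(probability: str, severity: str) -> str:
    """Map a high/low harm assessment to a human-oversight model."""
    if severity == "high":
        # Serious potential harm: a human makes or approves each decision.
        return "human-in-the-loop"
    if probability == "high":
        # Likely but low-severity impact: humans monitor and can intervene.
        return "human-over-the-loop"
    # Unlikely, low-severity impact: the system may decide autonomously.
    return "human-out-of-the-loop"

print(oversight_level(probability="low", severity="high"))
```

In practice an organisation would replace the two-value categories with its own risk taxonomy; the point the framework makes is that the oversight decision should follow from a documented harm assessment rather than precede it.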

The framework also contains recommendations on "good data accountability practices" to "ensure the effectiveness of an AI solution".

Businesses should take steps to understand "where the data originally came from, how it was collected, curated and moved within the organisation, and how its accuracy is maintained over time", address factors that impact on the quality of data, and counteract inherent bias in datasets that "may lead to undesired outcomes such as unintended discriminatory decisions", according to the framework.

Businesses are also encouraged to use different datasets for training, testing, and validation, and to periodically review and update the datasets used in AI solutions "to ensure accuracy, quality, currency, relevance, and reliability".
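One minimal way to put the separate-datasets recommendation into practice is a shuffled three-way partition. The sketch below uses only the Python standard library; the 70/15/15 proportions and the function name are illustrative assumptions, not figures taken from the framework.

```python
import random

def split_dataset(records, train=0.7, validation=0.15, seed=42):
    """Shuffle records and partition them into train/validation/test
    subsets, so the same examples are not reused across stages.
    Proportions are illustrative, not prescribed by the framework."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * validation)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

The periodic review the framework calls for would then amount to re-running such a split on refreshed data and re-validating the model against it.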

Algorithms used in AI systems should be "explainable" where possible; where they are not, "the repeatability of results produced by the AI model" should be documented instead, and decision-making processes recorded "in an easily understandable way", according to the new guidance.

Businesses providing AI solutions are further urged to provide information about their AI use to customers in a bid to build their trust in those solutions.

Companies could achieve this by, for example, "disclosing the manner in which an AI decision may affect the individuals, and if the decision is reversible", making "meaningful summaries" of ethical evaluations carried out in relation to their AI solutions available, and offering customers the option to 'opt out', according to the framework.

Businesses have been encouraged to pilot the use of the framework when implementing AI and to feed back their experiences by 30 June. The IMDA said the framework is "a living document, intended to be agile in evolving with the fast-paced changes in a digital economy and expected to continue to develop alongside adoptees' use".

Minister Iswaran said the framework "helps translate ethical principles into pragmatic measures that businesses can adopt".

"Where AI is concerned, there are big questions to be answered, and even bigger ones yet to be asked," Iswaran said. "The model framework may not have all the answers, but it represents a firm start and provides an opportunity for all – individuals and organisations alike – to grapple with fundamental ideas and practices that may prove to be key in determining the development of AI in the years to come."