A new code of practice is to be developed in the UK to help organisations deploy AI systems that make decisions affecting people in ways that comply with data protection law.
Publication of the new AI and automated decision making (ADM) code was promised by the Information Commissioner’s Office (ICO), the UK’s data protection authority, in a new AI and biometrics strategy it has laid out.
The ICO said the code, to be published within the next year, will provide “clear and practical guidance on transparency and explainability, bias and discrimination and rights and redress, so organisations have certainty on how to deploy AI in ways that uphold people’s rights and build public confidence”. The code will have a statutory footing.
Currently, UK data protection law provides people with a general right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. That right does not apply if the decision: is necessary for entering into, or performance of, a contract; is authorised by domestic law to which the controller is subject; or is based on the individual’s explicit consent.
The UK government last year set out proposed reforms to those rules, designed to make it easier for organisations to engage in automated decision making. The move is part of a wider push by the government to support AI use and development as an enabler of economic growth.
The revised ADM provisions are contained in the Data (Use and Access) Bill (DUAB). The Bill is in the final stages of its journey through parliament, having been delayed amidst an intense debate over whether new AI-related copyright protections should be written onto the face of the legislation. The DUAB is expected to be finalised and enacted shortly.
In addition to a new AI and ADM code, the ICO has confirmed it will consult on updating its existing ADM guidance by this autumn. Those updates, it said, would reflect the reforms included in the DUAB.
Data protection law experts Jonathan Kirsop and Lauro Fava of Pinsent Masons welcomed the ICO’s announcements.
Kirsop said: “The existing provisions in data protection law regarding ADM are cumbersome and hard to navigate, so the changes envisaged in the DUAB – which will mean consent will not be required for so many potential use cases of ADM in future – are welcome because they will make it clearer how ADM can be applied in practice.”
“The ICO’s new code should hopefully then go further by providing organisations with practical examples and detailed guidance to further facilitate implementation in a responsible, transparent and compliant way and in turn engender more public trust in the data processing associated with ADM. This is important amidst the very real concerns some people have about use of AI in decision making that can have a material impact on their lives.”
On ADM, the ICO said it plans to scrutinise its use in recruitment – and that it would “publish findings and regulatory expectations, holding employers to account if they fail to respect people’s information rights”. It also said it will ensure high standards of ADM in central government.
On the prospect of the new AI code of practice more generally, Fava said: “Industry would welcome clear and practical guidance on the issues identified by the ICO, particularly on bias and discrimination. The key words here are ‘clear and practical’. These are knotty issues, and the guidance that is currently available – not just from the ICO – can be quite difficult to apply in practice.”
“For example, it is often difficult for businesses to strike the right balance between data minimisation and ensuring fairness by avoiding bias. AI applications are often designed to mimic human behaviour or to source information created by people, which may include subtle biases which AI developers and deployers will struggle to detect and ‘fix’. Intervention by developers and deployers can also result in the application not behaving in the way users expect it to,” he said.
Kirsop added that the ICO’s planned actions on AI are noteworthy in the context of UK government plans to introduce new AI regulation, and the likely need for the government to identify an authority to lead on the oversight and enforcement of those new rules.