Out-Law News

ICO: data protection principles apply to AI use

New guidance issued by the UK's Information Commissioner's Office (ICO) confirms that data protection principles and laws that apply when processing personal data are equally applicable when the processing is carried out via artificial intelligence (AI) systems, a technology law expert has said.

Priya Jhakra of Pinsent Masons, the law firm behind Out-Law, said the draft guidance on a new AI auditing framework, which the ICO is consulting on, is useful because it shows how existing data protection principles can and should be applied to AI. Jhakra said the guidance also includes good practice recommendations that may help controllers and those processing personal data to comply with their legal obligations under data protection law when using AI systems.

Like the European Commission in its recent white paper on AI, the ICO has endorsed a risk-based approach to regulation and compliance, with its guidance addressing areas such as accountability and governance; fair, lawful and transparent processing of data; data minimisation and security; and the exercise of individuals' rights under data protection law in the context of AI systems.

"It is unrealistic to adopt a ‘zero tolerance’ approach to risks to rights and freedoms, and indeed the law does not require you to do so – it is about ensuring that these risks are identified, managed and mitigated," the ICO said.

The data protection authority explained that the use of AI entails trade-offs between privacy and competing rights and interests. This, it said, includes a trade-off between privacy and statistical accuracy.

Addressing the topic, the ICO said businesses can comply with their obligations on data accuracy under data protection laws when using AI systems to process personal data even if the system's outputs are not completely "statistically accurate". However, it said "the more statistically accurate an AI system is, the more likely that your processing will be in line with the fairness principle".

"Fairness, in a data protection context, generally means that you should handle personal data in ways that people would reasonably expect and not use it in ways that have unjustified adverse effects on them," it said.

The ICO said that the use of AI must be necessary, proportionate and for a legitimate purpose. It said it is not sufficient for a data controller to use AI simply because the technology is available – businesses need to evidence that they cannot accomplish the purposes of their data processing in a less intrusive way.

Jhakra said: "These are not new principles, and they should be applied to AI as with any other processing of personal data. However, with AI more thought is required to comply with these principles due to the specific nature of the risks posed by using AI, and the way personal data can be processed by AI."

In its guidance the ICO has also emphasised that it is a legal requirement for organisations to undertake a data protection impact assessment (DPIA) when seeking to use AI systems. This is because processing personal data using AI is deemed "likely to result in high risk to individuals' rights and freedoms" and so triggers the DPIA requirements under the General Data Protection Regulation (GDPR). Businesses might want to produce two versions of a DPIA for different audiences, it said.

"A DPIA should identify and record the degree of any human involvement in the decision-making process and at what stage this takes place," the ICO said. "Where automated decisions are subject to human intervention or review, you should implement processes to ensure this is meaningful and also detail the fact that decisions can be overturned."

"It can be difficult to describe the processing activity of a complex AI system. It may be appropriate for you to maintain two versions of an assessment, with: the first presenting a thorough technical description for specialist audiences; and the second containing a more high-level description of the processing and explaining the logic of how the personal data inputs relate to the outputs affecting individuals," it said.

On accountability, the ICO endorsed the principle of data protection by design, as well as the upskilling and training of staff. The ICO said it is also developing a more general accountability toolkit to help organisations comply with the GDPR.

The ICO has also offered guidance on when, in the context of using AI, organisations are considered to be a data 'controller' or a 'processor' under data protection law. Controllers and processors are subject to different obligations under the GDPR.

The ICO said organisations need to clearly identify who is a controller and who is a processor at the outset.

Organisations will likely be controllers if they decide the source and nature of the data used to train an AI model, the target output of the model, the broad kinds of machine learning algorithms used to create models from the data, and how models will be continuously tested and updated, the ICO said.

In the context of AI, processors might decide the specific implementation of generic algorithms, and how the data and models are stored, the ICO said.

The draft guidance, open to consultation until 1 April, is aimed at technology specialists as well as individuals in compliance-related roles. Examples provided by the ICO include machine learning developers and data scientists, software developers and IT risk managers, as well as data protection officers and general counsel.
