The ICO has published new guidance to help employers use AI and personal data appropriately and lawfully. It aims to reinforce the seven principles of the UK’s data protection regime, in particular data minimisation, transparency and accountability, building on previous guidance issued by the ICO. The central message remains the same: AI offers businesses huge potential, but it must be deployed appropriately.
A reminder: artificial intelligence, AI, is an umbrella term for technologies that perform tasks usually associated with humans. Through a process of ‘machine learning’, computers are able to perform specific tasks intelligently by learning from data rather than following ‘safer’ pre-programmed rules. But therein lies the problem: data and AI are inextricably linked, and that is where things can go badly wrong, hence the guidance.
HR’s role in this arena is spelt out clearly by the CIPD. They say:
‘It’s crucial for employers to put HR at the centre of technology implementation decisions and to involve employees directly. This will ensure that employees have a meaningful voice on matters affecting them, including the ways in which their job roles could be augmented or changed by technology.’
People Management looked at this recently in ‘How big a role should technology play in HR?’, exploring the use of automation in people management. On the plus side, they say, technology can give HR teams more time and resources to focus on work that develops people and builds culture and community, freeing up HR professionals to deal with human emotions, responses and personalities, which are too complex for machines to analyse and react to appropriately. However, they warn:
‘The key to success is automating the right processes and defining when it is time for humans to step in. Automating the wrong processes, or processes that use out-dated data and approaches, could undo all the good work HR teams have done on wellbeing, diversity and inclusion.’
Once you’ve made the choice to go with AI, the ICO’s new guidance advocates taking ‘a risk-based approach when developing and deploying the technology’. They say: ‘AI is generally considered a high-risk technology and there may be a more privacy-preserving and effective alternative’.
So, let’s consider that. The standard way to assess risk is to conduct a DPIA, a data protection impact assessment. But when it comes to using AI in the workplace, is a DPIA actually a legal requirement and something HR should be doing as standard practice, or not? In other words, is it a ‘nice to have’ or a ‘necessity’? It’s a question I put to data protection specialist Harriet Dwyer:
Harriet Dwyer: “It’s most likely going to be a necessity. So, artificial intelligence technology is recognised by the ICO as high risk. That doesn't mean to say that it's always going to be high risk, but this does need some thought. Basically, where a processing activity is considered high risk, a data protection impact assessment must be carried out. That is essentially a risk assessment in which you identify what the processing activity is, what you're trying to achieve by implementing it, and what the risks might be to the individual, and then, where there are risks, thinking about any mitigating measures you can implement to reduce those risks. Now, what people wrongly assume is that the data protection requirements really restrict people from carrying out certain types of processing, and that's not necessarily the case. So, data protection law doesn't require us to completely eliminate risk; it just requires us to sufficiently mitigate against it, and a DPIA is a really useful tool when we're thinking about using AI technology to process personal data, to help identify what those risks are and to ensure that we can sufficiently mitigate against them.”
Joe Glavina: “The ICO warns about the risk of discrimination when machines are used to make decisions and there is an obvious risk in the context of recruitment. What’s the advice you’re giving around that, Harriet?”
Harriet Dwyer: “So, the starting point with this is that the GDPR actually provides a right to individuals not to be subjected to automated decision-making where it is solely based on automation. By that I mean no human intervention. The right also applies where that decision-making has either a legal effect, or a significant effect, on the individual. So, in the context of recruitment you can obviously see that if an automated machine is making a decision about whether or not someone is going to get a job, or get shortlisted, this will have a significant effect on the individual, so the starting point is that it must not be done automatically and there must be some human intervention there. So, where employers and organisations are thinking about introducing AI technology in the context of recruitment, and, in fact, probably in the context of the employment relationship generally, it might be AI technology that helps some kind of decision-making process such as a redundancy situation or even a disciplinary situation. Managers need to be completely aware of the AI technology that is being used in the first instance, they need to understand how it works, and there needs to be a process in place for that to be reviewed by the managers and those involved in the recruitment or decision-making processes as well, so that they don't fall foul of that right not to be subjected to automated decision-making.”
Joe Glavina: “I know that some HR teams have shied away from AI given its complexity and the data protection implications of using it. Do you think this guidance helps to give some degree of confidence?”
Harriet Dwyer: “I think what the guidance demonstrates is that the ICO recognises there are many benefits to implementing artificial intelligence technology, and what we have seen is a hesitation in that regard because of concerns around how it conflicts with data protection. But I think the guidance really demonstrates that the ICO are aware of this and they're trying to help organisations become much more effective, efficient and innovative, and provided that organisations are applying artificial intelligence well, and in accordance with those obligations, there is no reason, really, why we should be taking a step back from being innovative in these ways.”
The ICO’s new guidance is called ‘How to use AI and personal data appropriately and lawfully’ and has been published on the ICO’s website, so you can download it from there. We have put a link to it in the transcript of this programme.