UK employers can expect a wave of new data and AI guidance in the coming weeks. Two of the UK’s regulators have announced plans to issue guidance on monitoring workers and on using artificial intelligence tools in recruitment.
One of the regulators is the Information Commissioner’s Office. The ICO has previously published detailed employment practices guidance, including the employment practices code, supplementary guidance and the quick guide. It is now planning to replace that with a new, more user-friendly online resource with topic-specific areas. The ICO says it wants to make sure the new guidance addresses changes in data protection law and reflects changes in the way employers use technology and interact with staff. It intends to address the processing of personal data in the context of recruitment, selection and verification, employment records, monitoring at work and workers’ health, as well as data processing in the context of TUPE.
Separately, the Equality and Human Rights Commission has said it will provide guidance on ‘how the Equality Act applies to the use of new technologies in automated decision-making’ and says it will be working with employers ‘to make sure that using artificial intelligence in recruitment does not embed biased decision-making in practice’. Those plans were outlined by the Commission in its draft strategic plan for 2022 to 2025.
Katy Docherty and Anne Sammon have been commenting on this for Outlaw. Katy says the Covid pandemic has brought into sharp focus the need for clarity on the processing of workers’ health data – hence the ICO’s plans for new guidance. She says it has also accelerated the trend to remote working and the use of employee monitoring technology – hence the Commission’s plans for guidance on that.
So let’s hear more about both sets of guidance and their relevance to HR. Anne Sammon joined me by video-link to discuss the issues. I started by asking Anne about the two regulators and what they are planning:
Anne Sammon: “Well, the ICO already has some existing guidance around using artificial intelligence, but there has been some concern in the past that it is difficult to understand and to properly implement, and so the ICO has said that it's going to try and produce that in a more user-friendly way. This is particularly important given that artificial intelligence seems to be becoming more common and we seem to be seeing more instances of it, so it's really important that employers and other organisations understand what their obligations are in relation to that from a data privacy perspective. In relation to the Equality and Human Rights Commission piece, their real concern is about automated decision-making. So they're trying to ensure that where organisations are using artificial intelligence to make decisions about individuals, those decisions aren't tainted by discrimination.”
Joe Glavina: “I can imagine HR being involved in the purchasing of AI, but what about implementing it? Is that something they should be concerned with?”
Anne Sammon: “I think one of the really challenging things about artificial intelligence is its complexity and it's very easy to go into situations not fully understanding how the technology works. It’s so important that employers really do have a good understanding of what the technology is actually doing and how it's working so that they can help to identify if there are potential discrimination issues. Without that sort of knowledge of how the product works, it is very difficult to take mitigating steps to alleviate any disadvantage that the technology might be causing.”
Joe Glavina: “In your Outlaw article you say that before implementing an AI tool it’s vital employers do some due diligence. What do you mean by that?”
Anne Sammon: “So I think there are two steps. The first step is the procurement of that artificial intelligence tool, and it's about making sure that HR teams have the confidence to ask the right questions and don't allow themselves to be bamboozled by technological language. So I think that's the first piece: asking questions so that you understand how it works and what it does is a key part of this. Then there's a separate piece about how you communicate that to the individuals who are subject to that technology. So, for example, if it's a recruitment exercise, I would expect HR teams to be talking to candidates about the technology that's being used and how it's being used, so that if questions were raised by those candidates in terms of potential disadvantage, the HR teams can respond accordingly.”
Joe Glavina: “You also mention in your Outlaw article that there are ethical issues around the use of AI. Why should HR take notice of that?”
Anne Sammon: “I think that there is a natural human tendency to be interested in those issues. I also think that, from an HR perspective, having an awareness of what those issues are is quite important so that if employees, or potential recruits, challenge the use of AI on those bases, the HR team are equipped to provide reasoned, proper responses rather than feeling hijacked by those questions.”
Back in August the ICO opened a consultation on its plans for new guidance. It wants to hear from stakeholders, including employers of course, so it’s a chance to have your say and potentially shape the guidance. That consultation runs until 21 October and we have put a link to the ICO's online survey in the transcript of this programme.
- Link to ICO consultation on employment practices
ICO call for views on employment practices | ICO