Out-Law News

ICO to investigate discriminatory AI in recruitment


Anne Sammon tells HRNews why HR needs to understand the AI systems implemented by the business 


  • Transcript

    The data watchdog, the ICO, is set to investigate whether employers using artificial intelligence in their recruitment systems could be discriminating against underrepresented groups. John Edwards, the information commissioner, has announced plans for an inquiry into the automated systems that screen job candidates, including looking at employers’ evaluation techniques and the AI software they use.

    Over recent years, concerns have mounted that AI may discriminate against minority groups because of the speech or writing patterns they use. Edwards said his plans over the next three years would consider ‘the impact the use of AI in recruitment could be having on neurodiverse people or ethnic minorities, who weren’t part of the testing for this software’. He said the ICO would, in due course, be issuing fresh guidance for AI developers and employers.

    Personnel Today reports on this, highlighting both the positive and negative sides of using AI in recruitment. Technology can help remove management biases and prevent discrimination but, equally, it can have the opposite effect because the algorithms themselves can amplify human biases. We have already seen a number of examples of that. Earlier this year Estée Lauder faced legal action after two employees were made redundant – a decision made using an algorithm. Last year, facial recognition software used by Uber was criticised for having a racist effect. Back in 2018, Amazon scrapped a trial of a recruitment algorithm that was discovered to be favouring men and rejecting applicants on the basis that they went to female-only colleges.

    John Edwards’ statement comes as no surprise - the ICO has been planning new guidance in this area for some time and it’s something Anne Sammon was flagging up in August last year in her Out-Law article ‘Flexibility, digital and diversity issues to shape UK workplace in 2022’. The Equality and Human Rights Commission has also said it will provide guidance on how the Equality Act applies to the use of AI. They say they want to work with employers ‘to make sure that using artificial intelligence in recruitment does not embed biased decision-making in practice’. Those plans were outlined by the EHRC in its draft strategic plan for the next three years.

    So, clearly there is already a strong focus on the use of AI in recruitment and it is an issue that is definitely moving up the HR agenda. To help understand why that is, and HR’s role, I caught up with Anne Sammon who joined me by video-link to discuss it:

    Anne Sammon: “I think one of the really challenging things about artificial intelligence is its complexity and it's very easy to go into situations not fully understanding how the technology works, and it’s so important that employers really do have a good understanding of what the technology is actually doing and how it's working so that they can help to identify if there are potential discrimination issues. Without that sort of knowledge of how the product works it is very difficult to take mitigating steps to alleviate any disadvantage that the technology might be causing.”

    Joe Glavina: “In your Out-Law article you say that before implementing an AI tool it’s vital employers do some due diligence. What do you mean by that?”

    Anne Sammon: “So I think there are two steps. The first step is the kind of procurement of that artificial intelligence tool and it's about making sure that the HR teams have the confidence to ask the right questions and don't allow themselves to be bamboozled by technological language. So, I think that's the first piece, asking questions so that you understand how it works and what it does is a key part of this. Then there's a separate piece about how you communicate that to the individuals who are subject to that technology. So, for example, if it's a recruitment exercise, I would expect HR teams to be talking to candidates about the technology that's being used and how it's being used, so that if there were questions raised by those candidates, in terms of potential disadvantage, the HR teams can respond accordingly.”

    Joe Glavina: “You also mention in your Out-Law article that there are ethical issues around the use of AI. Why should HR take notice of that?”

    Anne Sammon: “I think that there is the natural kind of human tendency to be interested in those issues. I also think that, from an HR perspective, having an awareness of what those issues are is quite important so that if employees, or potential recruits, challenge the use of AI on those bases the HR team are equipped to be able to provide reasoned, proper responses rather than feeling kind of hijacked by those questions.”

    Anne’s article on this explores all of that in more detail. It is called ‘UK employers can expect wave of new data and AI guidance’ and is available from the Out-Law website.

    LINKS

    - Link to Out-Law article: ‘UK employers can expect wave of new data and AI guidance’
