Out-Law News

UK government publishes ‘Responsible AI in recruitment’ guidance


Kate Dodd tells HRNews about the role of HR in the procurement of AI systems

  • Transcript

    The government has published guidance on responsible AI in recruitment to help employers minimise the risk of introducing systems that perpetuate biases and discrimination in the recruitment process. We’ll speak to a D&I specialist about HR’s role in the procurement of AI for use in recruitment.

    The guidance spells out how AI is becoming increasingly prevalent in the HR and recruitment sector and how it’s essential that the procurement, deployment, and use of AI by employers adhere to the UK Government’s current AI regulatory principles. Those are the principles outlined in the government’s white paper ‘A pro-innovation approach to AI regulation’, which was published in March last year. The guidance isn’t mandatory, but those principles are enforced by various regulators, including the Equality and Human Rights Commission and the data protection regulator, the ICO, so this is certainly something for HR to be aware of.
    Personnel Today covers this and highlights the section of the guidance dealing with procurement. It outlines what employers should consider before procuring an AI system, including asking what problem they are trying to solve and how AI can help address it, whether the AI systems on the market have the capabilities to solve that problem, and whether employees will need training or additional resources to use the system.

    The CIPD, which was directly involved in the development of this guidance, emphasises the importance of AI–human collaboration, and it’s a theme that runs throughout the guidance. The danger comes when employers rely on systems they don’t fully understand and where there is insufficient human oversight of the AI being used.

    We agree: human oversight is essential, and there is no reason why that human should not be an HR professional, especially if the AI in question is making people-related decisions. Kate Dodd heads up D&I consultancy Brook Graham, and the risk of bias when using AI is one she has been flagging consistently with clients. Earlier, Kate joined me by video-link to discuss it. I put it to Kate that HR needs to understand how the technology works before the business goes ahead and buys it:

    Kate Dodd: “Yes, absolutely, and it's really interesting because when you see how AI is marketed towards HR, the key messages are efficiency: this is going to really make your HR function so much more efficient, it's going to take out all the boring admin, the legwork, and we can do this for you. But the worrying thing is that we see very, very little in that advertising literature about how AI can possibly deal with the problem of bias, and how important it is, of course, to avoid bias in HR processes.”

    Joe Glavina: “So, ideally, HR needs to be involved in the procurement process?” 

    Kate Dodd: “Yes, absolutely, and it's worrying, actually, to see the number of businesses where HR don't have a voice in that procurement process. So, this will often be led elsewhere in the business, maybe there is a sophisticated procurement team, often there will be a working group who are looking at AI and how it can be used within a business - very rare to have HR representation on that. So, one of the key things, I would say, that our clients should be doing is making sure that HR have a voice in that procurement and it's not left until ‘here you go HR, here's some great piece of software’ which they then have to use.”

    Joe Glavina: “I know you’ve had recent experience of where things can go wrong, Kate. What happened in that case, without mentioning any names?”

    Kate Dodd: “Like many of my clients, this company was signed up to a disability framework. What they had done is they had committed that if an individual met the minimum criteria for a role, and they had a disability that was recognised by the Equality Act, then they would be guaranteed an interview, and that's a really common thing to have in the systems of businesses that are accredited for disability purposes. Now, what the client hadn't realised is that the AI they were using to do their screening simply didn't factor that commitment in. So, what they were finding is that individuals who should have been guaranteed an interview, because that's what the company had publicly committed to do, were not being offered interviews and, of course, for them, the PR aspects of that were really, really worrying.”

    Joe Glavina: “So that’s a good example of bias in AI. So how do you avoid it?” 

    Kate Dodd: “I think ensuring HR is involved in procurement is really important because HR is one of those areas where you need the human touch. The purpose of HR is to advocate for the people that work for that business, so you need to have HR people involved in the procurement processes and you need to have HR people in there challenging, saying, look, these are the things that we've signed up to as a business around diversity and inclusion, whether it's disability, whether it's something else, to say these are the commitments that we've made, and can you please guarantee that the AI that we're using is going to be able to take these things on board and, if not, then you need to have a workaround for that.”
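
    To make the kind of workaround Kate describes concrete, here is a minimal sketch of how a guaranteed-interview commitment could take precedence over an AI screening score. It is purely illustrative: the candidate fields, the threshold and the should_interview function are our own assumptions, not any vendor’s API or the guidance’s wording.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        meets_minimum_criteria: bool  # assessed against the published job criteria
        declared_disability: bool     # disclosed under the employer's scheme
        ai_score: float               # whatever score the screening tool outputs

    def should_interview(candidate: Candidate, threshold: float = 0.7) -> bool:
        # The publicly made guaranteed-interview commitment takes precedence
        # over the AI score, so the tool can never screen these candidates out.
        if candidate.declared_disability and candidate.meets_minimum_criteria:
            return True
        # Otherwise fall back to the AI screen, ideally subject to human review.
        return candidate.ai_score >= threshold

    # Example: a qualifying candidate with a low AI score is still interviewed.
    assert should_interview(Candidate(True, True, 0.2)) is True

    The point of the sketch is the ordering: the business’s published commitments are checked before the AI score is ever consulted, rather than bolted on afterwards.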

    That guidance, called ‘Responsible AI in recruitment’, was published on 25 March and is definitely worth reading. It’s set out in a very user-friendly way, with examples of the sort of checks and balances the regulators will expect to see. We’ve put a link to it in the transcript of this programme for you.

    LINKS
    - Link to ‘Responsible AI in recruitment – Guidance’
