
Out-Law News

AI in HR decision-making brings discrimination risk


Lisa Byars tells HRNews how HR can manage the risks of using AI to manage people


Transcript

    Could using machines to make decisions in the workplace be a discrimination risk? If so, what, if anything, should HR be doing about it? This has been in the news a lot since publication of the TUC’s report into the use of AI and its impact on workers. The union is concerned about machines making ‘life-changing’ decisions without enough accountability, and argues that the use of automation across HR is often shrouded in mystery and difficult for employees to challenge.

    People Management covers this and quotes Frances O’Grady, general secretary of the TUC, who says that without ‘fair rules’ the use of AI could lead to widespread discrimination and unfair treatment, especially for those in insecure work and the gig economy. She argues that every worker must have the right to have AI decisions reviewed by a human manager. The union is calling for legal reform for the ethical use of AI at work, including a new legal duty on employers to consult trade unions on the use of “high risk” forms of AI in the workplace, a legal right for workers to have a human review decisions made by AI systems, and amendments to the GDPR and the Equality Act to guard against discriminatory algorithms.

    To some degree the case for AI is made by the Nobel prize-winning author and psychologist Professor Daniel Kahneman, who was speaking to Radio 4’s Start the Week programme on Monday about decision-making in the workplace and the way people and machines make judgments. As Personnel Today reports, he talks about “noise”, his term for the variability in decisions made by people, and argues we see far too much of it. Machines, which are here to stay, use algorithms and simple rules and are noise-free, reliable and consistent. If the decisions they make are biased, it is because they have been trained on biased data sets, and that is the fault of a human being, not the machine. Humans, he says, are both biased and ‘noisy’, and therein lies the challenge for HR: finding a way to successfully marry up machines with humans to produce consistent and fair outcomes. Essentially, it is the failure of employers to do that which concerns the TUC.

    The TUC’s report is very detailed, running to well over 100 pages. The headline is ‘the use of AI in HR decision making could lead to widespread discrimination’ so let’s consider that. Lisa Byars is currently helping a number of clients to address the risks as they introduce automation to their systems. She joined me by video-link from Aberdeen to discuss it:

    Lisa Byars: “Well, it's quite an interesting one because we do get a lot of employers, clients, coming to us now because there is an increased use of this automated technology, and I think a lot of employers wrongly believe that they are somehow immune from discrimination claims because a computer is making these decisions. The issue we find is that it's actually the lack of human involvement that can lead to the risk of discrimination claims. So, for example, these automated technology systems rely on the data that's inputted into them, and a computer is not equipped to identify, consider or assess the impact of that data, or even the impact of the system, on individuals who have a protected characteristic, because only humans can really do that effectively.”

    Joe Glavina: “Can you tell me about the potential claims that might arise when there is an over-reliance on machines in managing people? The TUC is right that discrimination is a risk.”

    Lisa Byars: “Absolutely. The main claims you'd be looking at would be direct discrimination and indirect discrimination, and those claims could come from employees, prospective employees and workers. These claims are going to be costly to defend, for one; compensation is uncapped; and, as we find employers sometimes forget, there is the reputational damage if they are found to be using a system, without any assessment, that discriminates against individuals of a protected group.”

    Joe Glavina: “So what can employers do about it, Lisa? What are the questions HR needs to be asking? I see the problem, but how do you fix it?”

    Lisa Byars: “That is interesting and that is one of the key points, I suppose. I think the employer needs to start by considering, one, do we use these automated technologies at all? If so, where do we use them? They need to be asking, right from the start, why they are using it and whether it is necessary. Also, looking at the data they're using: is somebody assessing that data? Where is the data coming from? It's about identifying all the areas where they think problems could arise and introducing human review into that aspect of the process, to ensure it has been impact assessed. Are the results of the automated decision-making being reviewed? Also, importantly, one thing that employers sometimes forget, and we do remind clients of, is that the employees carrying out the assessment really need to have the proper training to be able to identify the discrimination risks that are key here.”
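    To make “reviewing the results” concrete, here is a minimal illustrative sketch; it is not from the report or the interview, and the group labels, selection figures and 80% threshold are all assumptions. It applies the well-known “four-fifths” rule of thumb to the outcomes of a hypothetical automated CV-screening run, flagging groups for human review:

        # Illustrative sketch only: checking the outcomes of a hypothetical
        # automated CV-screening run for adverse impact. All figures invented.

        def selection_rate(selected: int, applicants: int) -> float:
            """Fraction of applicants the system selected."""
            return selected / applicants

        # Hypothetical outcomes, broken down by group.
        outcomes = {
            "group_a": {"applicants": 200, "selected": 80},  # rate 0.40
            "group_b": {"applicants": 150, "selected": 36},  # rate 0.24
        }

        rates = {g: selection_rate(v["selected"], v["applicants"])
                 for g, v in outcomes.items()}
        best = max(rates.values())

        # "Four-fifths" rule of thumb: flag any group whose selection rate
        # falls below 80% of the highest group's rate for human review.
        for group, rate in rates.items():
            ratio = rate / best
            flag = "REVIEW" if ratio < 0.8 else "ok"
            print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")

    A flagged ratio here is only a prompt for the kind of human review Byars describes; it is not, by itself, a legal test under the Equality Act.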

    Joe Glavina: “So in summary, Lisa, what do you want HR to take away from this?”

    Lisa Byars: “It’s really just getting the key messages across to clients. Key message number one: ensure that clients identify the potential problem stages when using automated decision-making, introduce human review at every one of those stages, and invest in ensuring that those carrying out the assessments at those points have the training and are equipped to address and remedy any issues that are identified.”

    That report on the use of AI which was commissioned by the TUC is called ‘Technology Managing People – the legal implications’ and is written by Robin Allen QC and Dee Masters from Cloisters Chambers. We have put a link to it in the transcript of this programme.

    LINKS

    - Link to TUC report: ‘Technology Managing People – the legal implications’
    Technology_Managing_People_2021_Report_AW_0.pdf (tuc.org.uk)
