HR should be ‘wary’ of AI in decision-making

Out-Law News | 18 May 2021 | 8:41 am |

Steph Paton tells HRNews about the legal risks of using algorithms to manage people


  • Transcript

    Are you monitoring the extent to which machines are making decisions in your business? The use of AI in HR decision-making is increasing fast and it is something HR needs to be alive to.

    Personnel Today covers this with a good illustration of the problem: the case of the Uber drivers whose contracts were terminated unfairly as a result of technology-driven decisions. Six drivers were wrongly accused of fraudulent activity based on incorrect information produced by the technology. A court in Amsterdam ordered Uber to pay more than 100,000 euros in damages and ordered that the drivers should have their contracts reinstated – Uber’s HQ is based in the Netherlands, hence the Dutch court. Five of the drivers are based in the UK and the sector press has been following the litigation. IT Pro explains how the decisions to dismiss the drivers were based on incorrect assessments made by an algorithm. The UK drivers were represented by the App Drivers & Couriers Union, which argued that technology inside Uber's driver app, used to track drivers and verify their locations, had incorrectly flagged drivers for "fraudulent activity" without explanation.

    AI is often used in recruitment too, to do the sifting. The BBC has reported on computers rejecting job applications, including how, back in 2018, Amazon scrapped its own AI system because it showed bias against female applicants. The system gave job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon. Over time, however, it had essentially ‘taught itself that male candidates were preferable’ because they more often had greater tech industry experience on their CVs. Amazon declined to comment beyond saying that, whilst the tool was flawed, it was never actually used by Amazon to evaluate candidates.

    We have noticed that more and more businesses are now looking to introduce AI-based models into their recruitment and general employment processes, and it’s an area which is expanding fast. We are currently advising a number of clients on the risks associated with an over-reliance on machines and how to minimise them. One of the lawyers helping HR with this is Steph Paton, who joined me by video-link from Leeds. She had this advice for employers:

    Stephanie Paton: “So, firstly, employers should always be carefully planning the introduction of their AI and, when doing so, they should really be keeping employees’ rights and interests as an essential consideration, particularly the rights to data protection, to privacy, and also their general health and well-being, and that should really help employers to ensure that they're introducing this technology in a way that's lawful and also proportionate. Tied into that point is ensuring that employers are providing really clear and readily accessible information to employees so that they know what technology they're being subject to at any one time and how it will be used to make decisions that will impact them. Finally, just another important point to mention on this topic is that, given the sensitivity of the types of decisions that are made in HR, it really is crucial to maintain at least some degree of human input or human review to scrutinise the output of automated decision making, particularly when it comes to decisions about hiring and firing employees.”

    The TUC has commissioned a report into the use of AI and its impact on workers – they are concerned about machines making ‘life changing’ decisions without enough accountability. They argue the use of automation across HR is often shrouded in mystery and is difficult for employees to challenge. The report was produced by lawyers – the AI Law Consultancy – and it flags, among dozens of issues, an inherent distrust in AI due in large part to the fact that the systems are introduced solely for the employer’s benefit, and often with very little communication with staff. Steph Paton again:

    Stephanie Paton: “Well, as the use of AI technology continues to spread into the workplace it has definitely been met with a level of fear and distrust by employees, especially around how these technologies will benefit them as opposed to just benefiting employers, and I think largely that comes from the fact that generally these technologies are poorly understood. So that is a key challenge for HR and, again, our advice is that this should be tackled head on by adopting an approach of, firstly, transparency and collaboration with employees. There should always be a two-way dialogue about these technologies, with the aim of building a culture of openness and promoting AI as a tool to really help and benefit employees rather than controlling them or replacing them.”

    That report on the use of AI which was commissioned by the TUC is called ‘Technology Managing People – the legal implications’ and is written by Robin Allen QC and Dee Masters from Cloisters Chambers. It is well worth reading if you have the time. We have put a link to it in the transcript of this programme.

    LINKS
    - Link to TUC report: ‘Technology Managing People – the legal implications’