Artificial intelligence is increasingly being deployed by businesses and HR input is crucial to its success. That is the central message in the Outlaw analysis piece by Aisleen Pugh: ‘Artificial intelligence in the workplace: implications for HR professionals.’
She says AI is a challenging and complex subject, but HR professionals should consider specialising in this field in order to advise in a meaningful way on how AI affects the workforce, especially the potential HR risks from using AI applications. She says the harms from AI are often unintentional but can include: ‘bias and discrimination, unfair treatment, misuse of employee data and privacy and the possible negative effects to wellbeing as AI applications reduce the need for human interaction.’ She says HR should be involved from the outset as part of a governance team, with a role in purchasing the technology as well as in its implementation.
So, let’s consider both of those, the purchase and the implementation, with the help of two of our lawyers, Katy Docherty and Anne Sammon, who joined me by video-link. Katy Docherty is a data protection specialist. I asked her to explain the issue here:
Katy Docherty: “The issue here is that quite often we find that technology somewhat outpaces law and outpaces regulation, and it may be that the technology that is available to employers for various purposes, involving their employees or their customers, actually has technological capabilities that might in practice not be lawful under data protection legislation if you were to use that technology to its fullest capability. So, for example, new technology may allow for more intrusive monitoring of employees than would, in fact, be lawful if you were to carry out a data protection assessment of the activity. So, one of the key things for companies to look out for when they're researching and buying new technologies is really whether they are buying technology that is capable of doing more than they think they can lawfully do, and being careful that they don't go out of the bounds of lawful processing just because the technology is able to carry out a particular type of monitoring or a particular type of data processing. I think, probably, there is quite good scope for HR, or for those with data protection responsibility in an organisation, to be involved in that initial scoping and researching process when employers are looking at purchasing this technology for that reason.”
Once bought, the technology will need to be implemented in a way that is proportionate, bearing in mind the discrimination risk. Last week People Management ran an article on the misuse of employee surveillance technology, with various experts warning how the monitoring of workers’ emails, phones or webcams may not only damage trust but also put employers in legal hot water. They quote the TUC’s Frances O’Grady, who wants to see a new statutory duty requiring employers to consult with trade unions before introducing automated decision-making systems, along with a right to have high-risk decisions reviewed by humans.
Given the nature of the risk, it seems obvious that HR needs to be involved at the implementation stage if it’s what we might call ‘decision-making technology’. Lawyer Anne Sammon has been helping clients with this and she told me that HR’s job, first and foremost, is to understand precisely how it works:
Anne Sammon: “I think one of the really challenging things about artificial intelligence is its complexity and it's very easy to go into situations not fully understanding how the technology works, and it’s so important that employers really do have a good understanding of what the technology is actually doing and how it's working so that they can help to identify if there are potential discrimination issues. Without that sort of knowledge of how the product works it is very difficult to take mitigating steps to alleviate any disadvantage that the technology might be causing.”
Joe Glavina: “You’ve written about this for Outlaw and you’re saying that before implementing new technology it’s vital employers do some due diligence. What do you mean by that?”
Anne Sammon: “So I think there are two steps. The first step is the kind of procurement of that artificial intelligence tool and it's about making sure that the HR teams have the confidence to ask the right questions and don't allow themselves to be bamboozled by technological language. So, I think that's the first piece, asking questions so that you understand how it works and what it does is a key part of this. Then there's a separate piece about how you communicate that to the individuals who are subject to that technology. So, for example, if it's a recruitment exercise, I would expect HR teams to be talking to candidates about the technology that's being used and how it's being used, so that if there were questions raised by those candidates, in terms of potential disadvantage, the HR teams can respond accordingly.”
Joe Glavina: “Aisleen talks in her article about the ethical issues around the use of AI. Why should HR take notice of that?”
Anne Sammon: “I think that there is the natural kind of human tendency to be interested in those issues. I also think that, from an HR perspective, having an awareness of what those issues are is quite important so that if employees, or potential recruits, challenge the use of AI on those bases the HR team are equipped to be able to provide reasoned, proper responses rather than feeling kind of hijacked by those questions.”
That article by Aisleen Pugh is called: ‘Artificial intelligence in the workplace: implications for HR professionals’. It is available now from the Outlaw website.