Out-Law News

MPs call for urgent action on AI accountability


Lisa Byars tells HRNews about introducing AI technology into the workplace and the discrimination risk it brings

  • Transcript

    MPs have called on the government to take urgent action on AI accountability. An All-Party Parliamentary Group on the Future of Work has published a report detailing evidence about the impact of AI in the workplace, and the headline is that the MPs are recommending new legislation - an Accountability for Algorithms Act - which would impose new duties on employers.

    The report, ‘The New Frontier: Artificial Intelligence at Work’, explains that the use of AI in the workplace has markedly increased during the pandemic, including the use of algorithmic surveillance, management and monitoring technologies. The MPs say there is a growing body of evidence pointing towards a significant negative impact, caused by the use of algorithms, on the conditions and quality of work across the country. They argue that the monitoring of workers and the setting of performance targets through algorithms is damaging employees’ mental health, and that workers need far more visibility of how their employers are using digital tools.

    The proposed new legislation would create two new duties on private and public sector bodies. First, a requirement to provide staff with a ‘full explanation’ of how any algorithm they use works, plus a requirement for firms to carry out algorithmic impact assessments to identify the risks that the AI technology might bring. Second, workers would have the opportunity to give feedback on how these tools should be used in the future.
     
    The report comes shortly after the government published its national AI strategy in September, which confirmed that the Office for AI will develop a national position on governing and regulating AI, to be set out in a White Paper in early 2022.

    Back in March 2021 the TUC was warning that the use of AI by employers could lead to discrimination and unfair treatment of workers, and it was calling for urgent legislative changes. That warning was based on the findings of a report carried out for the TUC by two leading employment rights lawyers, Robin Allen QC and Dee Masters of the AI Law Consultancy. The authors said that while AI could be beneficial, if it is used in the wrong way it can be ‘exceptionally dangerous’. TUC general secretary Frances O’Grady expanded on that, telling the BBC that workers could be ‘hired and fired by algorithm’ and that new legal protections were needed, including a legal right to have any ‘high-risk’ decision reviewed by a human. She said ‘without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy’.

    The discrimination risk is identified in this latest MPs’ report. Barrister Helen Mountfield QC from Matrix Chambers gave evidence to the Parliamentary Group and is quoted at page 11 of the report saying: ‘In general, the Equality Act does not impose any obligations on employers or software designers or anyone else to think about or avoid discrimination and disadvantage as a proactive duty… Human beings and organisations that use machines of this kind have to take responsibility.’

    So let’s consider that discrimination risk. Whilst the MPs’ report looks ahead and calls for new legislation, the reality is that AI is already taking off and many of our clients have already introduced some form of AI into their business, or are planning to do so. Lisa Byars is currently helping a number of clients to address the risks as they introduce automation into their systems. She joined me by video-link from Aberdeen to discuss it:

    Lisa Byars: “Well, it's quite an interesting one because we do get a lot of employers, clients, coming to us now because there is an increased use of this automated technology, and I think a lot of employers wrongly believe that they are somehow immune to discrimination claims because a computer is making these decisions. The issue we find is that it's actually the lack of human involvement that can lead to the risk of discrimination claims. So, for example, these automated technology systems rely on the data that's input into them, and a computer is not equipped to identify, consider or assess the impact of that data, or even the impact of the system, on individuals who have a protected characteristic, because only humans can really do that effectively.”

    Joe Glavina: “Can you tell me about the potential claims that might arise when there is an over-reliance on machines in managing people? The TUC is right - discrimination is a risk.”

    Lisa Byars: “Absolutely. The main claims you'd be looking at would be claims of direct discrimination and indirect discrimination, and those claims could come from employees, prospective employees and workers. These claims are going to be costly to defend, for one, and there is uncapped compensation. We also find employers sometimes forget the reputational damage if they are found to be using a system, without any assessment, that does discriminate against individuals of a protected group.”

    Joe Glavina: “So what can employers do about it, Lisa? What are the questions HR needs to be asking? I see the problem, but how do you fix it?”

    Lisa Byars: “That is interesting and that is one of the key points, I suppose. I think the employer needs to start by considering, one, do we use these automated technologies? If so, where do we use them? They need to be asking right from the start why they are using it and whether it is necessary. Also, looking at the data they're using: is somebody assessing that data? Where are they getting the data from? It's about identifying all the areas where they think problems could arise and introducing human review into that aspect of the process, to ensure it has been impact assessed. Are the results of the automated decision-making being reviewed? Also, importantly, one thing that employers sometimes do forget, and we do remind clients of, is the fact that the employees carrying out the assessment really need to have the proper training to be able to identify the discrimination risks that are key here.”

    Joe Glavina: “So in summary, Lisa, what do you want HR to take away from this?”

    Lisa Byars: “It’s really just about getting the key messages across to clients. Key message number one is to ensure that clients identify the potential problem stages when using automated decision-making, introduce human review at every stage of that process, and invest in ensuring that those carrying out those assessments have the training and are equipped to address and remedy any issues that are identified.”

    That report by MPs, which has just been published and calls for the government to introduce new legislation, is called ‘The New Frontier: Artificial Intelligence at Work’. We have put a link to it in the transcript of this programme.

    LINKS

    - Link to report: ‘The New Frontier: Artificial Intelligence at Work’
