The government has published its long-awaited White Paper aimed at guiding the use of Artificial Intelligence (AI) in the UK. The paper, ‘A pro-innovation approach to AI regulation’, sets out the government’s proposals to regulate AI in a ‘pro-innovation manner’ with a ‘light touch’ approach to regulation. It acknowledges the potential benefits of AI, such as improving healthcare, enhancing transport systems, and boosting economic productivity, while also recognising the potential risks and challenges associated with this emerging technology, including unlawful discrimination.
The government’s framework is underpinned by five principles that are intended to guide how regulators approach AI risks. They are: (1) Safety, security and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress.
The government is not proposing to introduce new legislation specifically to deal with the use of AI for fear it would stifle innovation – so no new AI regulator. Instead, they will be relying on existing regulators, such as the Health and Safety Executive, Equality and Human Rights Commission and the Information Commissioner’s Office to police AI through existing frameworks which will be modified as necessary. It is expected that the various regulators will issue new guidance in due course on the application of the five AI principles within their own remit in accordance with existing laws and regulations.
Notably, that approach to AI is different to the EU’s approach. The UK's approach is light touch with minimal regulation designed to promote innovation and experimentation. In contrast, the EU’s approach is more cautious, with greater regulation to ensure that AI is used ethically and in the public interest across member states. That divergence in approach is likely to present challenges for companies with operations in both the UK and EU.
The White Paper contains over 30 consultation questions for stakeholders and is open for responses until 12 June 2023. After that the government plans to issue the cross-sectoral principles to regulators, together with initial guidance for their implementation. The government says it hopes to publish that in the next six months, alongside its response to the consultation.
Meanwhile, where does this leave HR professionals, given that many businesses are already buying in and using AI in all sorts of ways across the business? That has been an ongoing concern for two of the UK’s regulators, the Equality and Human Rights Commission and the ICO, who have both issued warnings about the potential discriminatory impact of using AI.
So, this is a fast-moving area which HR needs to stay on top of. Earlier, Anne Sammon joined me by video-link to discuss this and to explain what HR can do to minimise the risks:
Anne Sammon: “I think one of the really challenging things about artificial intelligence is its complexity and it's very easy to go into situations not fully understanding how the technology works. It’s so important that employers really do have a good understanding of what the technology is actually doing and how it's working so that they can help to identify if there are potential discrimination issues. Without that sort of knowledge of how the product works, it is very difficult to take mitigating steps to alleviate any disadvantage that the technology might be causing.”
Joe Glavina: “In your Out-Law article you say that before implementing an AI tool it’s vital employers do some due diligence. What do you mean by that?”
Anne Sammon: “So I think there are two steps. The first step is the kind of procurement of that artificial intelligence tool and it's about making sure that the HR teams have the confidence to ask the right questions and don't allow themselves to be bamboozled by technological language. So, I think that's the first piece: asking questions so that you understand how it works and what it does is a key part of this. Then there's a separate piece about how you communicate that to the individuals who are subject to that technology. So, for example, if it's a recruitment exercise, I would expect HR teams to be talking to candidates about the technology that's being used and how it's being used, so that if there were questions raised by those candidates, in terms of potential disadvantage, the HR teams can respond accordingly.”
Joe Glavina: “You also mention in your article that there are ethical issues around the use of AI. Why should HR take notice of that?”
Anne Sammon: “I think that there is the natural kind of human tendency to be interested in those issues. I also think that, from an HR perspective, having an awareness of what those issues are is quite important so that if employees, or potential recruits, challenge the use of AI on those bases the HR team are equipped to be able to provide reasoned, proper, responses rather than feeling kind of hijacked by those questions.”
The government’s White Paper is called ‘A pro-innovation approach to AI regulation’ and the consultation will be open until 12 June. If you would like to respond, Annex C contains details of how to do that. We have put a link to that Policy Paper in the transcript of this programme.