Out-Law News

AI safety summit must spur more guidance for businesses now, say legal experts



Businesses need governments and regulators to come together to provide greater clarity on how existing legislation and regulation around the world apply to the use of artificial intelligence (AI) systems, experts have said.

Cerys Wyn Davies, Hannah Ross, Zoe Betts and Tom Aries of Pinsent Masons said the AI safety summit being hosted in the UK this week provides an opportunity to build international consensus on what risks are posed by AI – as well as more certainty for businesses on the steps they should take to address those risks.

The UK government has said there is an “increasingly urgent” need to address AI-related risks. In a paper published last week, it set out its views on the risks posed by ‘frontier AI’ – which it has defined as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models” – and warned, among other things, that tomorrow’s AI systems could “help bad actors to perform cyber attacks, run disinformation campaigns and design biological or chemical weapons”.

With its summit, the government is hoping to build a shared understanding of ‘frontier AI’ risk and the need for action; agree a way forward for international collaboration on frontier AI safety, including how best to support national and international frameworks; and identify appropriate measures which individual organisations should take to increase frontier AI safety.

Cerys Wyn Davies

Partner

The problem many businesses are facing at this time is that there is a lack of clarity over how existing law and regulation applies in the AI context

Wyn Davies said: “The clear emphasis of the summit is on ensuring AI safety as the development of the technology evolves and becomes more sophisticated and advanced. However, it is already becoming business-critical for organisations to harness the power of AI to, for example, achieve operational improvements or enhance their customer service. The problem many businesses are facing at this time is that there is a lack of clarity over how existing law and regulation applies in the AI context.”

“So, as well as looking to future AI safety, the government should use its summit to promote harmonised new rules, guidelines or standards globally to help businesses now,” she said.

Many policymakers and regulators globally already recognise the need to address AI-related risk, but different models of regulation are emerging.

For example, EU law makers are in final talks over a proposed new AI Act, which could result in some AI systems being prohibited from use altogether and others being subject to stringent regulation – including around data quality, record-keeping, transparency, and human oversight.

In the UK, the government’s approach, unlike the EU’s, focuses not on regulating the technology per se but rather its use. It intends to retain the existing sectoral system of regulation but introduce a cross-sector framework of overarching principles that regulators will have to “interpret and apply to AI within their remits”. The five principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

However, AI use also poses compliance risks under existing legislation that businesses need to account for now.

Wyn Davies said: “A core example of this is in the context of data protection law, given the risk of personal data being processed in AI systems and – in the context of generative AI specifically – regurgitated in AI outputs, including in the context of entirely made-up AI ‘hallucinations’. In the UK, businesses will also be watching closely for progress on proposed data protection reforms which may make it easier for them to use AI in a way that supports automated decision making.”

Wyn Davies added that the onus is also on policymakers to ensure that efforts to promote AI development are balanced with the interests of content creators, highlighting that businesses in the UK are awaiting publication of a new AI copyright code this autumn.

Financial services regulators are among the authorities grappling with how the risks posed by AI should be addressed.

Industry would benefit from further guidance, such as on how to prevent, evaluate and mitigate the risk of bias when using AI, and on best practices for using personal data in AI in the financial services context

Hannah Ross, who specialises in financial regulation, said: “One of the challenges for financial regulators is how to address the risks posed by a technology that is evolving so quickly – you only need to look at the speed with which generative AI systems have risen to prominence in the last 12 months for an example. EU law makers have had to adapt the proposals for the new EU AI Act to account for generative AI and the risk it poses because it was not accounted for specifically in the first draft, which was published as recently as April 2021.”

“In the UK, the financial regulators published, and opened a consultation on, a discussion paper on AI and machine learning last year. The responses recently published highlight a wariness within industry of efforts to define AI for the purposes of regulating it, given the speed of technological change in the market,” she said.

“While there are no specific policy proposals for AI at the moment, firms do need to be conscious of how existing regulatory requirements apply to AI – such as how to ensure good consumer outcomes in the context of using AI, with a view to meeting their obligations under the Financial Conduct Authority's (FCA's) new consumer duty; compliance with the FCA’s high-level Principles for Businesses; and, for senior managers specifically, how they can ensure they are comfortable with how AI is used in the products and services their firm delivers, given their accountability under the FCA's senior managers and certification regime (SM&CR),” she said.

“It is clear from the responses to the consultation, though, that industry would benefit from further guidance, such as on how to prevent, evaluate and mitigate the risk of bias when using AI, and on best practices for using personal data in AI in the financial services context. Looking ahead, it is possible to imagine that the proposed new system of regulation for ‘critical third parties’ in UK financial services might impact AI providers active in the sector should firms’ dependence on such providers continue to grow,” Ross said.


Pinsent Masons is hosting an event on 14 November 2023 on transforming financial services with AI. Matthew Ichinose, senior product counsel at Google Cloud AI, is keynote speaker. Registration for the event is now open.


Important questions pertinent to AI liability also need to be addressed, according to Tom Aries of Pinsent Masons.

Aries said: “There is a huge question over who would be liable in the event that an AI system causes harm – for example, would a programmer be responsible for unconscious bias resulting in discrimination or another bad outcome for a consumer, or would liability rest with third party data providers for poor quality data, or the system implementer for a failure of due diligence? Beyond the regulatory sphere, it is a question that law makers are grappling with. EU law makers are scrutinising plans for a new AI liability law alongside wider reforms to product liability rules, while product safety and product liability reform also appears to be on the agenda in the UK, with implications for AI.”

Zoe Betts

Partner

Under UK law, employers must take all reasonably practicable measures to eliminate or mitigate risks associated with their work activities, including the use of AI systems

For employers, use of AI also has implications under health and safety law.

Health and safety law expert Zoe Betts said: “On the one hand, AI can help improve workplace safety through hazard monitoring and equipment control, introduce techniques which minimise human error, and support with crime detection and prevention. However, on the other hand, under UK law, employers must take all reasonably practicable measures to eliminate or mitigate risks associated with their work activities, including the use of AI systems.”

“Whilst the law recognises that elimination of all risks by an employer is often impractical, it is anticipated that the courts will expect organisations to take steps to mitigate risks associated with AI. It is therefore important for businesses to be able to demonstrate that they have, for example, undertaken a robust AI-specific risk assessment and properly and promptly implemented any safety measures identified in that assessment; that staff have received comprehensive training that reflects best practices; that there are appropriate internal systems of AI governance; and that any concerns raised by colleagues were acted on,” she said.
