
Experts back new global guidelines for AI security

New guidelines for secure artificial intelligence (AI) system development have been welcomed by two regulatory experts.

Agencies from 18 countries, including the US, have endorsed new UK-developed guidelines on AI cyber security, produced under the leadership of GCHQ’s National Cyber Security Centre. The guidelines aim to raise the cyber security of AI systems and help ensure that they are designed, developed and deployed securely.

Stuart Davey, cybersecurity expert at Pinsent Masons, said: “These global guidelines, led by the UK, mark a significant step in the ongoing AI discussions. The encouragement for developers to prioritise security throughout the AI lifecycle is welcome. The focus extends beyond design and development to deployment, operation, and maintenance.”

“Cyber risks to AI systems pose legal and operational threats, including regulatory liabilities, breach of confidentiality, data security issues, and disruptions to business processes. It is the users of AI who bear much of the risk. However, the guidelines emphasise that it is the developers of AI systems who are best placed to understand the potential risks to those systems. Whilst many cyber risks to AI systems have commonality with other technology solutions, there are distinct considerations for AI security, such as integrity attacks and model poisoning,” Davey added.

Cybersecurity advisor Regina Bluman, of Pinsent Masons’ cyber professional services team, said: “The number of allies in the signing of these guidelines is positive, especially given the potential for AI to worsen nationalism globally. Big cyber and tech leaders, such as Australia, Israel, Canada, the US, and Estonia, are involved, signalling that cooperation is crucial for positive advancements in AI. Tech transcends borders, making collaboration on guidelines essential.”

She added: “The guidelines emphasise SDLC basics for software development. Despite the challenges faced by many organisations, particularly with legacy systems, it is important to avoid accumulating technical debt in AI development wherever possible. Developers must prioritise secure principles, document technical debt promptly, and ensure privacy and security are a feature and not an afterthought, considering the rapid growth in this space.”

The guidelines are broken down into four key areas – secure design, secure development, secure deployment, and secure operation and maintenance – complete with suggested behaviours to help improve security.

“The guidelines also stress the importance of responsible AI usage within a trusted environment. Immediate implementation of responsible AI policies is crucial, as staff and businesses need to understand the AI capabilities and vulnerabilities that these new tools introduce. Organisations must manage ‘shadow AI’ cautiously,” Bluman said.

She added: “The guidelines also address the need for knowledge sharing in the AI community. Sharing experiences, learning from others, and collaborating with trusted partners are crucial for avoiding mistakes. Governments and tech giants bear the responsibility of sharing knowledge to ensure AI’s positive impact, necessitating safe forums for practitioners.”
