Out-Law News | Reading time: 2 min.

ISO and IEC publish new international standard on AI

International standards-setting bodies ISO and IEC have jointly published the first global standard for AI management systems, providing guidance for structuring and overseeing AI systems prudently, ethically and transparently while maintaining data privacy and information security.

While the new standard, ISO/IEC 42001, does not have the force of law, ISO/IEC standards are extremely influential in a global business context. They provide highly instructive guidance to institutions that develop, provide, or simply operate and use AI-based products or services, and they seek to systematically ensure reliable, clear and responsible management of AI throughout its lifecycle.

“Standards are frequently referred to both in contractual context and in litigious matters. Notably, courts like to use national as well as international standards to judge whether certain actions were up to date or done with adequate diligence,” said technology law expert Dr. Nils Rauer of Pinsent Masons. “Standards are a very important source of information and guidance across the board. It is fair to say that they ‘stamp’ market behaviour. Thus, the new AI standard clearly has the potential to become influential in that sense.”

The first edition of the standard, published in December 2023, covers various aspects of artificial intelligence and provides an integrated approach to understanding and mitigating the risks inherent in deploying AI systems in an organisational context. However, standards are not static, and the new standard is expected to develop over time in response to legislative developments such as the EU's upcoming AI Act. Standards can also be influenced by emerging market trends or new risks.


The first aspect of the new AI standard concerns planning. Organisations should first consider how they intend to use AI, reflecting carefully rather than hastily adopting the technology. “The standard requires careful and proactive management, such as comprehensive risk and impact evaluation. This is very similar to what already exists in other domains, such as the data protection impact assessment requirement of Article 35 GDPR,” Rauer said.

“Equally, these requirements contribute to maintaining adequate cyber security when deploying AI-related applications,” added Ben Gibbins, cyber expert at Pinsent Masons.

An essential part of the standard is the development and implementation of an AI policy. It stresses the importance of organisations establishing a structured approach to managing AI systems, which includes drawing up clear and comprehensive AI policies. An AI policy should cover aspects such as ethical issues, transparency, continuous learning, risk management and governance. By developing and applying AI policies, organisations can ensure the ethical development and use of the technology, demonstrate responsibility, and achieve transparency and reliability in their AI-related activities.

Provisions in the standard on resources, competence and awareness highlight the need for organisations to ensure that they have sufficient resources, including people and facilities, to support the governance of AI systems. These sections also address the importance of making sure that personnel involved in AI-related activities possess the necessary competence, education, and awareness of the ethical considerations, transparency and need for continuous learning associated with the technology.

“AI systems are information systems that have unique traits; for example, their decision making is not always transparent and they continuously change during use. Along with their benefits, they introduce new operational and information security risks that require the introduction of new security controls, which ISO/IEC 42001:2023 helps to address,” Gibbins said.

“As well as developing documents such as an AI policy, it will also be important for businesses to establish processes that enable careful management of AI systems. Clause 8 on operational matters will therefore be of great importance,” Rauer said.

Clause 8 on operational matters discusses the management of AI systems within organisations. It includes provisions for ensuring the effectiveness of AI management systems and processes, as well as monitoring AI-related activities and addressing nonconformities. Process controls should be monitored, and appropriate actions considered if the intended outcomes are not reached. Additionally, organisations should keep written records of the outcomes of all AI risk assessments.

“Similarly, the mechanisms that handle nonconformity and corrective action (clause 10.2) are about using procedural measures and structure as the tools to reduce existing risks. This is a common approach in many areas,” Rauer said.

Putting the new standard in a wider context, businesses tend to use ISO/IEC standards as an indication of quality and sustainability when procuring services, and compliance with them spreads steadily across markets over time as a result. In the financial services industry, for example, most institutions require compliance with ISO/IEC 27001 on information security, and corresponding certification is effectively a must. It can be assumed that ISO/IEC 42001 will embark on a similar journey as regards AI, Rauer added.