The first aspect of the new AI standard concerns planning. It requires organisations to consider how they intend to use AI before deploying it, emphasising reflection rather than hasty adoption of the technology. “The standard requires careful and proactive management, such as comprehensive risk and impact evaluation. This is very similar to what already exists in other domains such as the data protection impact assessment requirement of Article 35 GDPR,” Rauer said.
“Equally, these requirements contribute to the maintaining of adequate cyber security when deploying AI-related applications,” added Ben Gibbins, cyber expert at Pinsent Masons.
An essential part of the standard is the development and implementation of an AI policy. It stresses the importance of organisations establishing a structured approach to managing AI systems, which includes drawing up clear and comprehensive AI policies. An AI policy should address matters such as ethical issues, transparency, continuous learning, risk management, and governance. By developing and applying such policies, organisations can support the ethical development and use of the technology, demonstrate accountability, and achieve transparency and reliability in their AI-related activities.
Provisions in the standard on resources, competence and awareness highlight the need for organisations to ensure that they have sufficient resources, including people and facilities, to support the governance of AI systems. These sections also address the importance of ensuring that personnel involved in AI-related activities possess the necessary competence and education, as well as an awareness of the ethical considerations, transparency obligations and need for continuous learning associated with the technology.