MEPs voted overwhelmingly in favour of adopting the regulation on Wednesday – 523 MEPs voted in favour, 46 voted against, and there were 49 abstentions.
Under the EU AI Act, a new risk-based system of regulation will apply to AI across EU member states.
Under the new framework, some uses of AI will be prohibited entirely, while the strictest regulatory requirements are reserved for ‘high-risk’ AI systems and the providers and deployers of such systems. Dr Nils Rauer of Pinsent Masons in Frankfurt said that the area in which an AI system is applied could trigger its ‘high-risk’ categorisation, such as in several contexts across the educational sector, in critical infrastructure, or in the context of administration of justice and democratic processes, for example.
Where AI systems are categorised as ‘high-risk’, the technology will need to meet mandatory requirements around issues such as risk management, data quality, transparency, human oversight and accuracy, while the providers and deployers of those systems will face a range of duties, including around registration, quality management, monitoring, record-keeping, and incident reporting. Further obligations will apply to importers and distributors of high-risk AI systems.
Another layer of regulatory obligations will also apply to so-called general purpose AI (GPAI) models. Under the AI Act, providers of GPAI models must publish detailed summaries of the content used to train those models and – in the case of GPAI models that pose “systemic risk” – meet further obligations around model evaluation and adversarial testing.
Rauer said the requirements, taken as a whole, represent “a matrix of regulatory measures introduced by the EU AI Act”.
Paloma Bru of Pinsent Masons in Madrid said: “In Spain, the approval of the EU's AI Act will have a direct and immediate impact on AI testing that has been carried out under the framework of the Spanish AI sandbox – the first European AI test environment linked to the EU AI Act, which has been operating since last year. AI projects that have participated in sandbox testing in Spain based on earlier versions of the law will now need to review whether they need to adapt their solutions to accord with the finalised legislative text prior to it taking effect.”
Amsterdam-based Wouter Seinen of Pinsent Masons said: “The EU AI Act is the type of legislation that many will argue should have been introduced 30 years ago to account for developments in software more generally. It will have a wide impact across sectors and will require businesses developing, supplying and using AI to shift how they think about the technology – regulatory compliance will now be front-and-centre, alongside consideration of the commercial opportunities AI presents.”
“Although the EU AI Act is a regulation that will have direct application in each EU member state, it will need to be supplemented with national legislation that provides for its implementation across the 27 different jurisdictions. Within this, there is scope for divergence in how EU countries provide for enforcement. The Dutch data protection authority is earmarked as the relevant supervisory authority in the Netherlands, despite questions being raised over the extent of its knowledge and skills on AI and the new law in this area,” he said.