The AI Act, dubbed the world’s first AI law, is set to come into force in the EU within weeks after the proposed legislation cleared a final vote.

The Council of Ministers approved the EU AI Act on Tuesday morning, following the EU’s other main law-making body, the European Parliament, adopting the Act back in March. The Council’s vote – which comes three years and one month after the initial draft of the EU AI Act was proposed by the European Commission and after navigation through thousands of proposed amendments during the legislative process – paves the way for the formal signing of the legislation and its subsequent entry into the Official Journal of the EU (OJEU).

The EU AI Act will come into force 20 days after its publication in the OJEU, though most of its provisions will not take effect for a further two years after that date.

Experts in technology law at Pinsent Masons said the adoption of the EU AI Act brings significant new regulatory requirements for businesses that provide, deploy, import, or distribute AI systems – and that the legislation is likely to have a major influence on the development of regulatory standards for AI in other jurisdictions globally.

Frankfurt-based Nils Rauer said: “The EU AI Act does not ‘cherry-pick’ specific areas of AI, for example the regulation of generative AI, but rather follows an all-embracing approach, trying to set the scene for developers, deployers and those affected by the use of AI.”

“EU legislators have decided to combine two structural concepts in one piece of law. There is the risk-based approach for AI systems and a separate concept applying to general-purpose AI (GPAI). The first follows the idea of AI systems applied in certain areas being deemed a high-risk application, the latter focuses on the potential of the AI model. The higher such potential is, the more likely we see a systemic risk. It remains to be seen how these two concepts work together,” he said.

Amsterdam-based Wouter Seinen said the EU AI Act has the potential to drive “digital compliance” in businesses in a better way than the GDPR has done.

“The GDPR attempts to change the mindset and attitude of businesses towards transparency and risk management, but compliance rates are still disappointing. The lack of standards and good practices on the one hand, and the enforcement strategies in Europe on the other, seem to play a role in this. It is good news that the AI Act will be driving the introduction of official standards, as this will help businesses verify their compliance. It is to be hoped that the enforcement of the AI Act will focus on nudging and educating, backed with systematic enforcement rather than setting examples by singling out a handful of companies and imposing massive fines on them.”

Under the AI Act, some types and uses of AI will be prohibited altogether, while the strictest remaining regulatory requirements are reserved for AI systems classed as ‘high-risk’. Pinsent Masons has developed a guide to help businesses understand whether the AI systems they develop or use constitute high-risk AI.

Seinen said: “The introduction of data governance in respect of training, validation and test datasets pertaining to high-risk AI systems is a positive development as it will drive the maturity and hygiene of those using software and AI in their operations.”

“Some businesses may decide that it is more efficient to deploy the same governance and controls framework to all their AI systems rather than spending energy on determining classification – whether ‘high-risk’ or limited risk – and keeping evidence of why each system was classified in a certain tier,” he added.

Dublin-based Andreas Carney said financial services firms are among those likely to benefit from having set regulation on the use of AI.

“Regulated sectors such as financial services have been seeking to gain the benefits offered by AI for their businesses and customers, while at the same time needing to properly balance this with their risk profile,” Carney said. “Having this legislative framework in place now enables them to better manage that exercise and move toward adoption.”       

Madrid-based Paloma Bru said the Council's vote comes shortly after the Spanish government approved its new AI strategy, which will be deployed in 2024 and 2025. Fresh funding totalling €1.5 billion has been committed to the strategy’s implementation, in addition to the €600 million already mobilised.

“The AI strategy and its initiatives will be coordinated by the secretary of state for digitalisation and AI, but due to its scope and impact, the whole Spanish government will be actively involved,” Bru said.

London-based Sarah Cameron said significant changes were made to the EU AI Act as it passed through the legislative process.

“Amongst the most significant changes since 2021 are the amendments to the definition of AI systems, to align more closely with the OECD definition – itself recently updated; the late inclusion of provisions around GPAI to address copyright and transparency of training data as well as governance and risk obligations; and the requirement for a fundamental rights impact assessment for high-risk categories of AI systems,” Cameron said.

Rauer said that the EU AI Act has the potential to influence other legislators in their effort to regulate AI, but he said the approaches taken by other countries in respect of the regulation of AI have, to-date, been different. He said China, the US and the UK are three countries that have “taken recognisably different paths”.

Cameron added: “While the EU seeks to set the gold standard for AI regulation as it has done in the field of data protection law with the GDPR, other countries have chosen to take a more agile and flexible approach to regulation. Nonetheless, there is a clear sense of increased global cooperation around AI regulation, particularly towards addressing the systemic risks of rapid advances in AI most notably around safety. As the EU AI Act standards are fully developed by EU standards bodies to enable full implementation of the EU AI Act, cooperation including around standards globally has the power to bring greater cohesion and navigability to both business and states alike.”

Public policy expert Mark Ferguson of Pinsent Masons said that the entry into force of the EU AI Act in the weeks ahead will only represent the beginning of a new legislative phase. He said individual EU member states will have to “grapple with the legislation” in order to give practical effect to its implementation, including the appointment of competent authorities for enforcing the Act in each jurisdiction. He added that new codes of practice for GPAI are expected to be developed over the next 12 months, and flagged that a review of the legislation to be undertaken by the European Commission could lead to amendments to the AI prohibitions within the next year too.

Ferguson said: “Businesses will see more and more regulation coming down the track, so the EU AI Act is the first, not last, word on legislation in this space. Businesses have an important role to play in shaping the next phase of regulation in the EU – and elsewhere – as the Commission will seek views on how it impacts business operations, innovation, and safety.”

Wesley Horion, also of Pinsent Masons, added: “As promising as the AI Act looks, it leaves a lot of ends open and the onus will be on the Commission together with the bespoke stakeholder committees to elaborate implementing acts that will bring more practical flesh to the framework’s bones. Only then will we be able to assess whether the Act is as innovation-friendly as intended by the legislator.”

The Council’s vote has coincided with an AI summit in Seoul, co-hosted by the UK and South Korean governments.

The summit follows the AI safety summit hosted by the UK in November 2023, where some of the world’s leading powers, including the US and China, signed an international accord – the so-called Bletchley declaration – recognising the need for AI development and use to be “human-centric, trustworthy and responsible”. A new AI safety risk testing regime was also developed.

While the Seoul summit also addresses AI safety, other topics are on the event agenda too – including innovation and inclusivity. On Tuesday, the UK government announced that an expanded list of companies has now committed to developing next-generation AI systems, known as frontier AI, safely.

Last week, foreign ministers from the 46 member countries of the Council of Europe – which is not an EU institution – adopted the first international treaty regulating AI and its impact on human rights.

The treaty establishes rules requiring users of AI technology to make sure it does not undermine democracy, human rights, the rule of law and privacy. The text will be binding for member countries of the organisation as well as observer countries that sign and ratify it, and the rules apply to all public authorities. Individual governments can apply them to private actors such as AI developers as well, but governments can also opt out from doing so.