London-based Sarah Cameron said significant changes were made to the EU AI Act as it passed through the legislative process.
“Amongst the most significant changes since 2021 are the amendments to the definition of AI systems, to align more closely with the OECD definition – itself recently updated; the late inclusion of provisions around GPAI to address copyright and transparency of training data as well as governance and risk obligations; and the requirement for a fundamental rights impact assessment for high-risk categories of AI systems,” Cameron said.
Rauer said that the EU AI Act has the potential to influence other legislators in their efforts to regulate AI, but he said the approaches taken by other countries in respect of the regulation of AI have, to date, been different. He said China, the US and the UK are three countries that have “taken recognisably different paths”.
Cameron added: “While the EU seeks to set the gold standard for AI regulation as it has done in the field of data protection law with the GDPR, other countries have chosen to take a more agile and flexible approach to regulation. Nonetheless, there is a clear sense of increased global cooperation around AI regulation, particularly towards addressing the systemic risks of rapid advances in AI most notably around safety. As the EU AI Act standards are fully developed by EU standards bodies to enable full implementation of the EU AI Act, cooperation including around standards globally has the power to bring greater cohesion and navigability to both business and states alike.”
Public policy expert Mark Ferguson of Pinsent Masons said that the entry into force of the EU AI Act in the weeks ahead will only represent the beginning of a new legislative phase. He said individual EU member states will have to “grapple with the legislation” in order to give practical effect to its implementation, including the appointment of competent authorities for enforcing the Act in each jurisdiction. He added that new codes of practice for GPAI are also expected to be developed over the next 12 months, and flagged that a review of the legislation to be undertaken by the European Commission could lead to amendments to the list of prohibited AI practices within the next year too.
Ferguson said: “Businesses will see more and more regulation coming down the track, so the EU AI Act is the first, not last, word on legislation in this space. Businesses have an important role to play in shaping the next phase of regulation in the EU – and elsewhere – as the Commission will seek views on how it impacts business operations, innovation, and safety.”
Wesley Horion, also of Pinsent Masons, added: “As promising as the AI Act looks, it leaves a lot of ends open and the onus will be on the Commission together with the bespoke stakeholder committees to elaborate implementing acts that will bring more practical flesh to the framework’s bones. Only then will we be able to assess whether the Act is as innovation-friendly as intended by the legislator.”
The Council’s vote coincided with an AI summit in Seoul, co-hosted by the UK and South Korean governments.
The summit follows the AI safety summit hosted by the UK in November 2023, where some of the world’s leading powers, including the US and China, signed an international accord – the so-called Bletchley declaration – recognising the need for AI development and use to be “human-centric, trustworthy and responsible”. A new AI safety risk testing regime was also developed.
While the Seoul summit also addresses AI safety, other topics are also on the event agenda – including innovation and inclusivity. On Tuesday, the UK government announced that an expanded list of companies has now committed to developing next-generation AI systems, known as frontier AI, safely.
Last week, foreign ministers from the 46 member countries of the Council of Europe – which is not an EU institution – adopted the first international treaty regulating AI and its impact on human rights.
The treaty establishes rules requiring users of AI technology to make sure it does not undermine democracy, human rights, the rule of law and privacy. The text will be binding for member countries of the organisation as well as observer countries that sign and ratify it, and the rules apply to all public authorities. Individual governments can apply them to private actors such as AI developers as well, but governments can also opt out from doing so.