Out-Law Analysis

Why EU lawmakers won’t allow the AI Act to fall


There are political and economic reasons why EU lawmakers will not allow the proposed new EU AI Act to fall, despite lingering uncertainty about whether, when and in what form the regulations might be finalised.

This means businesses should prepare as though the draft legislation will be formally adopted this spring.

The state of play with the AI Act

In December last year, lawmakers announced that they had reached a provisional agreement on the proposed AI Act, which provides for the risk-based regulation of AI systems across EU member states.

The announcement followed “marathon talks” involving the then Spanish presidency of the Council of Ministers, negotiators representing the European Parliament, and officials from the European Commission, which had laid out its original proposals for a new AI Act in April 2021.

Subsequently, the consolidated text apparently representing what was provisionally agreed was leaked online, giving developers and users of AI systems the first chance to consider in detail what the proposed new framework would mean for them.

In the EU law making process, when a consolidated text becomes available following the announcement of a provisional agreement at the end of trilogue negotiations between the different law-making institutions, it typically signals that the procedural process for adopting the text can begin. That process will involve both the European Parliament and the Council of Ministers moving towards formal votes on the text – both institutions must adopt proposed EU legislation before it can be written into EU law. The process is normally routine, reflecting the fact that significant consensus has already been built during the trilogue negotiations and that there will already have been intense scrutiny of and debate over the text earlier in the legislative process.

However, the position does not appear as straightforward with the EU AI Act, with reports that France and Germany are leading efforts to achieve late amendments to the text.

General purpose AI models and the French and German concerns

France and Germany are reportedly concerned about the potential impact that some provisions of the proposed AI Act might have on innovation.

Politico reported that proposed transparency obligations that would impact providers of “general purpose AI models” have raised particular concern. Italy and Austria are said to be equally inclined to favour less stringent rules on those models. 

According to the leaked consolidated text, a general purpose AI model is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”, with the exception of AI models that are “used before release on the market for research, development and prototyping activities”.

Under the proposed new legislation, providers of those AI models would be obliged to draw up and make publicly available a sufficiently detailed summary of the content used to train their AI model.

A recital in the draft text further expands on this proposed disclosure duty. It states: “While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used.”

According to Politico, a French government official has said that France is seeking to re-open trilogue talks on the wording of the AI Act, with Germany also “pushing for more business-friendly strictures in the law”. The publisher, however, also cited an unnamed EU official as saying that the views of France, Germany, and Italy, which also seeks changes, are not shared by other EU member states.

Representatives of the governments of EU member states are due to discuss the EU AI Act at a meeting of the Council of Ministers on Friday 2 February. Originally, it had been envisaged that the meeting would result in a rubber-stamping of the text developed since the political deal reached in December. However, with France and Germany seeking changes, there is now some uncertainty over the Council’s timelines for adopting the Act.

Why compromise soon is likely

Despite the apparent differences of opinion and the late hitch in finalising a deal, final agreement on and adoption of a new EU AI Act this spring remains likely. Political and economic drivers make that the most probable outcome.

The AI Act is a flagship piece of legislation that EU officials have trumpeted as the world’s first ever law on AI – its approval is a priority for EU leaders. There is broad consensus across the European Parliament and Council of Ministers on the need for new regulation to address AI risks, and the reality is that there is agreement on the vast majority of the proposed new text.

In this context, there is a clear incentive for lawmakers to adopt the EU AI Act before the European Parliament elections take place in June this year – and, more precisely, by the date of the last pre-election plenary session in the Parliament, which is scheduled for between 22 and 25 April. If the text is not adopted before the elections, newly-elected MEPs – potentially with different political views and agendas from the current cohort – could reopen, rather than ratify, the file and seek substantive amendments.

At the very least, a failure by EU lawmakers to adopt the AI Act this spring would delay the introduction of an EU-wide framework for addressing AI risks. This would have a significant negative economic impact, not least because the EU is in a global race to operate a regulatory environment for AI that provides sufficient protection against potential harms while leaving enough scope for AI development and innovation that can deliver improved productivity and economic growth. The US and UK are among the many countries globally exploring how best to regulate the use of AI.

Delay in the adoption of EU-wide AI rules would also leave a vacuum for individual EU countries to develop their own national AI laws in response to the emerging new risks. AI technology has evolved quickly – so much so that the regulation of general purpose AI models was not envisaged by the European Commission at the time it published its original AI Act proposals. Some EU countries might consider that in such a fast-moving market they cannot wait for EU-wide rules to be finalised beyond this spring. In that scenario, there is the potential for a patchwork of different regulatory requirements and standards to emerge within the EU, bringing with it increased compliance costs for Europe’s AI start-ups – and other AI developers seeking to enter the EU market. This would represent a real barrier to cross-border business and growth.

While it is possible for EU legislation to pass without the support of France and Germany, it is implausible that regulations central to the future of the EU’s economy would be imposed on countries of the size and importance of France and Germany by other member states. The political and economic drivers mean that if France and Germany table sensible, targeted amendments to address their lingering concerns about aspects of the AI Act, it is likely that a compromise will be reached in the Council of Ministers that will set in train the chain of events needed to enable the legislation to be adopted this spring.

Actions for businesses

The EU AI Act will not mark the end of AI regulation in Europe. AI is developing far faster than anyone can govern or regulate it, and it is quite possible legislators will never be able to catch up with or contain the technology. This will most likely lead to policymakers reacting to AI after something has gone wrong rather than in advance, if at all. Businesses should therefore expect the regulatory framework for AI to evolve.

There are many opportunities for businesses in the uptake and use of AI, but the risks that AI presents to a business's reputation are also great. This requires businesses to take action.

Comprehensive compliance strategies

Businesses should develop a thorough understanding of existing and emerging AI regulations worldwide. They must stay abreast of changes in policies, standards, and legal frameworks across different regions.

Establishing a dedicated compliance team to monitor and ensure adherence to global AI regulations can help. This team should include legal experts, data scientists, and ethicists who can collectively navigate the complex landscape of AI governance.

Implementing regular audits and assessments to evaluate the alignment of AI practices with evolving regulatory standards will also be important. This proactive approach helps identify and address compliance gaps before they escalate into significant issues.

Ethical AI practices

Businesses should prioritise ethical considerations in AI development and deployment. They should design AI systems that uphold principles such as fairness, transparency, and accountability to mitigate the risk of non-compliance with diverse regulatory standards.

Engaging in ongoing dialogue with stakeholders, including customers, policymakers, and advocacy groups, to understand their concerns and expectations regarding AI applications, is also important. This will ensure that business practices align with societal values and regulatory expectations.

Businesses should also establish clear guidelines for the ethical use of AI within the organisation, incorporating input from diverse perspectives. They should promote responsible AI practices through employee training and awareness programmes to create a culture of ethical AI use.

Global collaboration and advocacy

Businesses should participate in industry collaborations and consortia focused on establishing common standards for AI. By engaging with international organisations, industry peers and policymakers, companies can contribute to the development of global norms that balance innovation with ethical and regulatory considerations.

It is also in a business's interest to advocate for clear, harmonised, and internationally recognised AI regulations. Businesses should seek to actively participate in forums where they can provide input into the shaping of AI governance frameworks.

By proactively engaging with policymakers, businesses can also provide input on the practical implications of proposed regulations – and emphasise the importance of fostering innovation while ensuring responsible AI deployment.
