Amsterdam-based technology law expert Wouter Seinen of Pinsent Masons said the distinction being drawn under the EU AI Act between models not in scope, GPAI models, and GPAI models ‘with systemic risk’ parallels the distinction the EU’s Digital Services Act draws between online platforms and ‘very large’ online platforms.
Seinen said reliance on FLOP as a metric for classifying GPAI is flawed. It is unclear, he said, how the Commission plans to audit the computing power used to train a model, and how that computing power is informative of the model’s importance or risk level.
“The approach does not appear ‘technology neutral’, which has been the gold standard for tech laws in Europe up until now,” Seinen said. “The Commission subtly acknowledges this where it states that ‘training compute is an imperfect proxy for generality and capabilities’ – and that is an understatement.”
“The proposal prompts the question of how imminent developments such as the rise of AI agents and agentic AI will interplay with the concept of GPAI and the scope of its legal definition, let alone what will happen once AI models are trained using quantum computing, as the FLOP measurement will not make a lot of sense in that use case,” he said.
The Commission’s working document sets out standardised ways for businesses to determine the amount of computational resources they use to train or modify AI models.
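For illustration only, the sketch below shows the rough estimation method commonly used in the AI research community – the ‘6ND’ approximation, under which training a dense transformer costs roughly six floating-point operations per parameter per training token. This is not the Commission’s prescribed methodology, and the model size and token count used are hypothetical; the 10^25 FLOP figure, however, is the AI Act’s threshold above which a GPAI model is presumed to present systemic risk.

```python
# Illustrative sketch only: the common '6ND' approximation of training
# compute (FLOP ~= 6 x parameters x training tokens), not the
# Commission's standardised methodology.

# AI Act threshold above which a GPAI model is presumed to pose
# systemic risk (Article 51): 10^25 FLOP.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Rough training compute for a dense transformer via the 6ND rule."""
    return 6 * num_parameters * num_tokens

# Hypothetical model: 70 billion parameters, 2 trillion training tokens.
flop = estimate_training_flop(70e9, 2e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
print("Presumed systemic risk under the AI Act:",
      flop > SYSTEMIC_RISK_THRESHOLD_FLOP)
```

For the hypothetical figures above, the estimate comes out at 8.4 × 10^23 FLOP – about an order of magnitude below the systemic-risk threshold, which shows how sensitive the classification is to model scale.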
The working document also highlights that the Commission is considering a data-related carve-out to the regulatory exemptions that apply under the GPAI regime to AI models made accessible under a free and open-source licence. Those exemptions would not apply, according to the proposals, where providers of those models collect personal data “from the use of the model or the accompanying services” – other than for the purpose of using that data “to improve the security, compatibility or interoperability of the software”.
Among the obligations that providers of GPAI models face under the AI Act are duties to put in place an EU law-compliant copyright policy and to enable rightsholders to reserve their rights not to have their works used for training. In its working document, the AI Office outlined plans to reduce the associated compliance burden – relating to disclosure of the sources of data used to train models – for providers that place their GPAI models on the market before 2 August 2025.
“The AI Office recognises that in the months following the entry into application of the obligations of providers of general-purpose AI models in the AI Act on 2 August 2025, some providers may face various challenging situations to ensure timely compliance with their obligations under the AI Act,” it said. “Accordingly, the AI Office is dedicated to supporting providers in taking the necessary steps to comply with their obligations.”
“In particular: for general-purpose AI models that have been placed on the market before 2 August 2025, providers must take the necessary steps to comply with their obligations by 2 August 2027. This does not require re-training or unlearning of models already trained before 2 August 2025, where implementation of the measures for copyright compliance is not possible for actions performed in the past, where some of the information for the training data is not available, or where its retrieval would cause the provider disproportionate burden. Such instances must be clearly justified and disclosed in the copyright policy and the summary of the content used for training,” the AI Office added.
According to the AI Office, signatories to the new GPAI code of practice can expect their adherence to the code to be a focus of the Commission’s enforcement activities, once the GPAI regime takes effect. It added that commitments made in a code of practice could also be considered “a mitigating factor” when it is deciding what level of fine to impose for non-compliance.
Providers that elect not to adhere to the code will be “expected to demonstrate how they comply with their obligations under the AI Act via other adequate, effective, and proportionate means”, the AI Office said. It further suggested that those providers will be subject to “more requests for information and access to conduct model evaluations, since there may be less clarity regarding how they ensure compliance with their obligations under the AI Act”.