Out-Law / Your Daily Need-To-Know

Out-Law News 10 min. read

EU digital simplification to soften data, AI rules


EU commissioner Henna Virkkunen said the proposals would give “space for innovation to happen and to be marketed in Europe”. Thierry Monasse/Getty Images.


Plans to relax data protection laws to facilitate AI development and use in the EU have been outlined as part of a series of measures designed to boost growth and innovation in the trading bloc.

Data and technology law specialists Michelle Seel, Thijs Kelder, Anne-Sophie Mouren, Alex Ha Kyung Kim, Daniel Widmann, Anna Flanagan, and Wesley Horion of Pinsent Masons broadly welcomed the proposals as representing a “sea change” in the EU’s approach to regulating digital companies and markets. Public policy expert Mark Ferguson, however, said EU policymakers’ pursuit of pragmatism in the name of bolstering competitiveness brings some uncertainty for businesses and makes scenario planning essential.

A raft of legislation is earmarked for reform across the proposals for a Digital Omnibus Regulation and Digital Omnibus on AI Regulation that the European Commission published on Wednesday – including centrepiece EU frameworks like the General Data Protection Regulation (GDPR) and AI Act.

Amsterdam-based Michelle Seel of Pinsent Masons said: “By streamlining overlapping obligations and amending outdated rules, the proposal tackles the criticism the EU has faced that it operates a regulatory thicket that stifles innovation and competitiveness in Europe. Policymakers will hope it gives start-up and small tech companies breathing room in Europe’s digital economy.”

Significant reforms to the GDPR proposed

The GDPR reform proposals are among the most significant, as they include plans to alter the very definition of ‘personal data’ – the processing of which is subject to the GDPR’s rules.

Under the Commission's proposals, information that organisations hold would not be said to constitute personal data if they lack the “means reasonably likely to be used to identify the natural person to whom the information relates”. In those circumstances, the organisation in question “would not, in principle, fall within the scope” of the GDPR in respect of that data.

The Commission said that if the information is shared with third parties that do have the means reasonably likely to identify the person to whom the information relates, then those third party organisations would need to treat the data as personal data.

UK-based Anna Flanagan of Pinsent Masons said: “The proposed clarification could be a game-changer for businesses handling pseudonymised or aggregated datasets – where re-identification is not feasible for them – as they may avoid the full weight of GDPR obligations. For cloud providers, analytics platforms, and AI developers, this could reduce compliance burdens and unlock greater flexibility in leveraging data for innovation, without compromising privacy where identification risk is genuinely low.”

Other proposed GDPR reforms are designed to support AI development and use.

For example, the Commission wants to make it explicit in EU law that businesses have a ‘legitimate interest’ in processing personal data in the context of “the development and operation of an AI system.”

While this would not amount to an automatic right to process such data – the processing would need to be ‘necessary’ and businesses would still have to carry out a balancing test to make sure their ‘legitimate interest’ is not overridden by the rights and interests of the data subjects to whom the information relates – it would strengthen the lawful basis for such processing without individuals’ explicit consent.

The Commission said: “When the controller … is balancing the legitimate interest pursued by the controller or a third party and the interests, rights and freedoms of the data subject, consideration should be given to whether the interest pursued by the controller is beneficial for the data subject and society at large, which may for instance be the case where the processing of personal data is necessary for detecting and removing bias, thereby protecting data subjects from discrimination, or where the processing of personal data is aiming at ensuring accurate and safe outputs for a beneficial use, such as to improve accessibility to certain services.”

“Consideration should also, among others, be given to reasonable expectations of the data subject based on their relationship with the controller, appropriate safeguards to minimise the impact on data subjects’ rights such as providing enhanced transparency to data subjects, providing an unconditional right to object to the processing of their personal data, respecting technical indications embedded in a service limiting the use of data for AI development by third parties, the use of other state of the art privacy preserving techniques for AI training and appropriate technical measures to effectively minimise risks resulting, for example, from regurgitation, data leakage and other intended or foreseeable actions,” it added.

A further amendment proposed to the GDPR would lift existing restrictions on the processing of ‘special categories’ of data where the processing is undertaken in the context of AI development and use. Examples of special category data include information about a person’s health, race, religion or political beliefs.

“The derogation should only apply where the controller has implemented appropriate technical and organisational measures in an effective manner to avoid the processing of those data, takes the appropriate measures during the entire lifecycle of an AI system or AI model and, once it identifies such data, effectively remove them,” the Commission said. “If removal would require disproportionate effort, notably where the removal of special categories of data memorised in the AI system or AI model would require re-engineering the AI system or AI model, the controller should effectively protect such data from being used to infer outputs, being disclosed or otherwise made available to third parties.”

Paris-based Anne-Sophie Mouren of Pinsent Masons said further GDPR reforms that have been proposed would help to reduce some of the administrative burdens that organisations face in handling requests from individuals for a copy of the personal data they hold about them – so-called data subject access requests (DSARs).

In this regard, the Commission has proposed to make it easier for organisations to refuse to deal with DSARs where the data subject “abuses the rights conferred by this regulation for purposes other than the protection of their data”. The changes envisaged are designed to make it easier for businesses to class such requests as “manifestly unfounded or excessive”.

Mouren said: “This change would be welcomed by many data controllers, especially employers facing an increasing number of DSARs from their employees in the context of disputes with their employers. Employers are currently unable to refuse such requests on the grounds of a suspicion that the data requested could be used by the employee to defend themselves in court in the event of dismissal. Yet, should this proposal be implemented, it would still raise a number of difficulties from an evidence perspective in the event of a refusal by the employer.”

Reporting obligations to be streamlined

Other reforms are envisaged to streamline organisations’ obligations around the reporting of security and data breaches that arise across a suite of often overlapping EU regulations.

Munich-based Daniel Widmann of Pinsent Masons said: “The move to limit GDPR breach notifications to only critical incidents is a long-overdue step that will reduce unnecessary administrative work for businesses and regulators alike. The introduction of a single reporting platform for IT security and data protection incidents is also a major win for companies, who have faced increasing complexity in meeting parallel obligations under NIS2, DORA, CRA, and the GDPR. For regulated companies, the digital omnibus package means less red tape and a more focused approach to incident reporting.”

“Businesses should review their internal processes now to ensure they are ready for the new, risk-based notification regime,” he said.

'Cookie banners' in the firing line

The Commission’s package also provides for updates to EU rules on ‘cookies’, with a view to addressing “consent fatigue and proliferation of cookies banners”. Three main changes are envisaged.

First, the Commission plans to change how the processing of personal data stored in terminal equipment is governed. The processing would be governed by the GDPR, not the Privacy and Electronic Communications (e-Privacy) Directive, and – as now – generally require the consent of the data subject.

Second, it has proposed to extend the circumstances in which the processing of such data will be considered ‘necessary’ – and therefore not require consent. If implemented, this will mean that consent is not required for processing personal data contained in cookies for data analytics or service security purposes, or where the processing is undertaken to deliver a service the user has explicitly requested.

Third, the Commission has proposed legislative amendments that provide for device users’ consent choices to be indicated – and respected – in an automated and machine-readable way, once technical standards have been developed to enable this.

London-based Alex Ha Kyung Kim of Pinsent Masons said: “Together, these measures streamline user experience while preserving privacy safeguards.”

A delay to rules on 'high-risk' AI?

The Commission’s Digital Omnibus on AI Regulation proposal contains “targeted simplification measures” aimed at addressing challenges identified in the implementation of the AI Act.

The AI Act was written into EU law last year but only some of the provisions have taken effect so far – prohibitions on certain types and uses of AI began applying in February, while rules impacting providers of so-called ‘general purpose AI’ (GPAI) models came into effect in August.

Rules applicable to ‘high-risk’ AI systems are not due to come into effect until August 2026. In recent months, however, the Commission has come under pressure from industry and politicians globally to delay the implementation of those rules. Now, the Commission has outlined plans which could delay their implementation until near the end of 2027.

Under the Commission’s plans, implementation of the ‘high-risk AI’ regime would be linked “to the availability of standards or other support tools”, such as new guidelines. Depending on how the ‘high-risk’ AI system should be categorised under the AI Act, the rules applicable to those systems would not take effect until either six or 12 months have passed from the point that new standards or support tools have been put in place – with respective backstop dates proposed of 2 December 2027 and 2 August 2028 for implementation.

Edinburgh-based Mark Ferguson added: “The proposed delay to high-risk AI rules is not just a technical adjustment – it signals a broader shift in EU regulatory politics toward pragmatism and competitiveness.”

“Businesses should recognise that simplification is a political priority, but it comes with uncertainty: fragmented coalitions and a more polarised European Parliament mean future compromises will be harder to predict. Companies need to plan for flexibility, engage early in consultations, and anticipate that enforcement timelines may remain fluid,” he said.

“Businesses should treat this as a warning: compliance timelines are moving targets, and political fragmentation means sudden shifts are likely. Proactive scenario planning and regulatory engagement are no longer optional – they’re essential for resilience,” Ferguson said, adding that the EU's recalibration on AI regulation is part of a broader competitiveness narrative.

“The delay signals that the EU is attempting to balance innovation with oversight in response to global pressures from the US and China. Businesses should anticipate more pragmatic adjustments ahead but also prepare for political unpredictability as the Parliament becomes more polarised,” he said.

According to the Commission’s proposals, a suite of new EU guidelines relevant to the high-risk AI regime will be developed. The guidance will address everything from the classification of high-risk AI systems to the requirements associated with such systems and the specific obligations providers and deployers of high-risk AI systems face. It will also address the reporting of serious incidents by providers of high-risk AI systems and post-market monitoring.

The Commission further intends to remove the obligation for providers of high-risk systems to register those systems in a central EU database when they are exempted from high-risk classification.

Other AI-related measures proposed include making the EU’s AI Office responsible for overseeing compliance of AI systems built on GPAI models or embedded in ‘very large online platforms’ and ‘very large online search engines’ per the classification system under the EU Digital Services Act.

Existing provisions requiring providers and deployers of AI systems to ensure employees understand AI systems will also be watered down, under the Commission’s plans. In future, those companies will only be “encouraged” to do so.

Frankfurt-based Wesley Horion of Pinsent Masons said: “While reducing regulatory burdens is a positive goal, this change raises serious practical and reputational risks. Without a clear obligation, employees may use complex AI tools without adequate training, increasing the likelihood of misuse and exposing organisations to liability and reputational damage.”

“Giving employees advanced technology without proper training is like handing over heavy machinery without instruction: it’s a recipe for negligence. What looks like regulatory relief could become a poisoned gift, offering short-term simplicity at the cost of long-term trust and risk management. Organisations should treat AI literacy as best practice, regardless of regulatory shifts, to safeguard both operational integrity and public confidence,” he said.

A consolidation of other data rules

Widmann also flagged noteworthy changes the Commission has proposed to the EU Data Act.

The Data Act is designed to foster a competitive data market, encourage data-driven innovation, and increase data availability. The legislation, among other things, seeks to ensure fairness in the allocation of data value among all actors in the data economy, and clarify who can use what data and under which conditions.

Under its proposed reforms, data holders would, however, have greater scope to refuse to disclose trade secrets. Other changes planned would streamline and consolidate a raft of rules that apply across other EU legislation, such as the Data Governance Act, Regulation on free flow of non-personal data, and the Open Data Directive, into the Data Act.

The Data Act further sets out a switching and porting regime aimed at preventing vendor lock-in in respect of data processing in the cloud. The Commission has proposed new exemptions from the rules on switching between data processing services, as well as a lighter regime for custom-made data processing services and those provided by SMEs.

“The overhaul of the Data Act is a significant step towards a more business-friendly and innovation-driven data economy in the EU,” Widmann said. “By consolidating overlapping rules, strengthening trade secret protections, and reducing burdens for SMEs, the reforms will make it easier for companies to share and use data. Businesses should review their data sharing practices and take advantage of new exemptions and streamlined compliance options.”

The legislative process ahead

The Commission said the measures it has outlined “should enable Europe’s businesses to spend less time on administrative work and compliance and more on innovating and scaling up” – and that they could save businesses up to €5 billion in administrative costs by 2029.

The proposals are subject to the usual EU law-making process, whereby they will be scrutinised by law makers in the European Parliament and Council of Ministers – those law makers must agree on the wording of the reforms through formal votes to adopt them, before they can be written into EU law and take effect.

Amsterdam-based Thijs Kelder of Pinsent Masons said: “These proposals mark a sea change in the EU’s regulatory approach and are a response to those policymakers and businesses around the world that have been critical of what they view as the negative effects of the proliferation of inconsistent regulatory obligations, especially on start-ups and SMEs. The measures, once adopted, will reduce the compliance burden on businesses, and increase the room for innovation within the EU without diluting the core protections of the digital acquis.”
