Out-Law / Your Daily Need-To-Know

OUT-LAW ANALYSIS

What the ‘AI Omnibus’ holds for businesses

Inside the European Parliament in Brussels, where MEPs will have their say on the AI Omnibus proposal. Thierry Monasse/Getty Images.


The ‘AI Omnibus’ proposal put forward by the European Commission in November 2025 aims to simplify EU AI Act implementation across the EU and to ensure the related regulatory and administrative burdens falling on businesses are proportionate to their size and the AI risk level.

However, the proposal introduces new uncertainties which businesses must navigate in the period prior to the approval and adoption of the updated rules.

Below, we look at the AI Omnibus proposal in more detail, explore its context, and examine how it might impact businesses.

Background to the proposals

A series of studies in recent years – including one by former European Central Bank president Mario Draghi – have raised concerns about the impact the EU’s approach to regulating technology and technology companies is having on innovation and competitiveness in the trading bloc. The Commission’s promise to act on Draghi’s recommendations is reflected in the AI Omnibus proposals.

The AI Omnibus is part of a wider ‘digital simplification’ package the Commission put forward in November 2025. The package is aimed at reducing administrative burdens on all businesses by at least 25% by 2029, and by 35% for small and medium enterprises (SMEs) in particular. SMEs already receive special consideration under the AI Act.

The AI Omnibus presents a raft of proposed amendments simplifying implementation of selected AI Act provisions. These include, for example, changes to compliance timelines, regulatory concessions for small mid-cap companies (SMCs), and amendments to AI literacy obligations, to registration and post-market monitoring requirements, and to the rules around regulatory sandboxes and real-world testing. An expansion of the AI Office’s powers is also provided for.

Although the intent behind the digital simplification package has been met with praise, some industry bodies such as MedTech Europe have expressed concerns regarding unresolved issues of clarity, fragmentation, and duplication within the AI regulatory framework.

Revised compliance timelines

The AI Omnibus is designed to smooth the AI Act’s rollout, including by re‑sequencing compliance deadlines to align with the dates when practical guidance and harmonised standards are expected to become available to businesses.

While some provisions under the AI Act are already effective – including rules on prohibited AI and rules relating to general-purpose AI (GPAI) models – most provisions are due to apply from 2 August 2026.

Among the most notable reforms due to take effect on 2 August 2026 are the rules relating to ‘high-risk’ AI systems. However, the AI Omnibus proposal introduces a conditional start for this regime, tying commencement of the requirements to the Commission’s confirmation that the supporting harmonised standards and guidelines are in place. Once the standards and guidelines have been published, the proposals envisage a six-month lead-in period for the rules relating to Annex III “stand‑alone” high‑risk systems and a 12-month lead-in period for the rules relating to high‑risk AI embedded in, or constituting, regulated products. Backstop dates of 2 December 2027 and 2 August 2028, respectively, have been proposed for those rules to take effect irrespective of whether standards have been published by then.

The AI Omnibus also foresees a limited grace period for certain systems already on the market – for example, an extra six months, to 2 February 2027, to retrofit transparency measures for generative AI released before 2 August 2026.

These proposed timings underline the central role of standards in the AI Act’s structure: once cited in the EU’s Official Journal, harmonised standards confer a presumption of conformity. Standardisation bodies (CEN‑CENELEC) are, in parallel, working towards making key AI Act standards available in late 2026.

Because the AI Act’s general date of application remains 2 August 2026, EU co‑legislators are now racing against the clock to adopt the AI Omnibus in time for its adjusted timelines to take effect. In the interim, businesses should plan with the AI Act’s current structure and August 2026 milestone in mind, while monitoring the AI Omnibus process and the standards pipeline that will ultimately activate much of the ‘high‑risk’ regime.

Regulatory concessions for SMCs

In addition to revising compliance timelines, the AI Omnibus contains measures designed to make compliance with the AI Act easier for businesses, especially SMCs, without undermining the core objectives of the AI Act, namely ensuring safety, fundamental rights protection, and fostering trustworthy AI innovation.

Specifically, the AI Omnibus contains several concessions for SMCs which would align their treatment under the AI Act with that of SMEs. These include extended timelines for compliance with certain high-risk AI system requirements, simplified conformity assessment procedures, enhanced access to regulatory sandboxes, and changes to technical documentation and post-market monitoring requirements. In justifying the proposed concessions, the Commission explicitly acknowledged that disproportionate regulatory costs could hinder smaller companies from participating in the AI market. The Commission further accepted the need to enable a smooth transition for companies as they grow from SME size, by shielding them from the full set of rules applicable to larger entities.

From a structural perspective, these measures are integrated into the broader framework of the AI Act, which remains risk-based and largely technology-neutral. If adopted, the AI Omnibus would not alter the fundamental classification of AI systems or the core obligations for high-risk systems; rather, it would introduce procedural adaptations to make compliance more proportionate for smaller entities. For example, while all providers of high-risk AI systems must implement risk management and quality management systems, SMEs and SMCs may benefit from reduced documentation requirements.

The intended impact of these provisions is twofold: to maintain a high level of protection of fundamental rights, while avoiding market concentration by enabling smaller players to compete effectively. The Commission emphasises that without such adjustments, the cumulative compliance costs under the AI Act could disproportionately affect SMEs and SMCs, potentially stifling innovation and limiting diversity in the AI ecosystem. By contrast, larger entities would remain subject to the full procedural and substantive obligations without derogation.

The proposed SMC concessions reinforce the EU’s strategic goal of fostering an inclusive and competitive AI market. By embedding SME- and SMC-specific measures within the AI Act’s implementation framework, the AI Omnibus seeks to balance regulatory rigour with economic pragmatism. These adjustments align with a broader policy trend in EU digital regulation: safeguarding fundamental rights and systemic safety while promoting innovation and market access for smaller economic actors.

Revised AI literacy obligations

The proposed AI Omnibus would also introduce a significant change to the AI literacy obligations set out in Article 4 of the AI Act.

Under the current framework, providers and deployers of AI systems are required to ensure that their staff possess an adequate level of AI literacy to comply with the Act’s requirements. The AI Omnibus proposes to remove this direct obligation from economic operators and instead place the responsibility on the Commission and member states to foster AI literacy at a systemic level. This adjustment reflects a policy choice to centralise educational and awareness-raising efforts, potentially reducing compliance burdens on individual businesses.

In the explanatory memorandum published alongside the proposal, the Commission acknowledged that requiring providers and deployers to ensure AI literacy internally could create significant administrative and financial burdens, especially in sectors where AI expertise is scarce. By shifting the obligation to public authorities, the AI Omnibus aims to promote a more uniform and accessible approach to AI literacy across the EU.

For larger entities, the practical effect of this change may be limited, as many already invest in internal AI training programmes to manage operational and reputational risks. However, for smaller providers and deployers, the shift represents a meaningful reduction in regulatory complexity.

Registration exemptions for certain high-risk systems

The AI Omnibus also contains proposals to reduce registration burdens for providers of AI systems used in high-risk areas where providers have concluded, based on self-assessments, that their systems do not actually pose significant risks to fundamental rights, democracy or safety given the nature of the specific tasks performed. In that scenario, under the new proposals, current requirements for system providers to register themselves and their systems in the relevant EU database would be removed.

Specifically, Article 6(3) provides that AI systems otherwise defined as high-risk may be exempted from high-risk classification under any of four conditions – including where the system is “intended to perform a narrow procedural task”, provided it does not perform profiling of natural persons, which is always deemed high-risk. Even where this exemption applies, Article 49(2) as currently drafted states that before such systems may be placed on the market or into service, providers must complete the registration process. This ensures that all high-risk AI systems are identifiable even if self-assessments conducted under Article 6(3) are wrong.

The Commission has proposed to delete the Article 49(2) registration requirement on the basis that it imposes a “disproportionate compliance burden” in the absence of genuine high risk. This move supports greater simplicity and consistency across the AI Act by helping to ensure that non-high-risk systems are not subject to, and need not be concerned with, requirements intended for high-risk systems.

Crucially, however, under Article 6(4), providers of these exempted systems would remain obliged to document their assessments of non-high-risk status and may be called upon to provide those assessments to national competent authorities. It therefore appears possible that, even if the Article 49(2) requirement is deleted, national authorities could still disagree with providers’ self‑assessments, potentially resulting in penalties for providers that fail to register those systems, not to mention any other failures the authorities may find with respect to other requirements under the high-risk AI systems regime.

Businesses should therefore continue to ensure that their risk classification criteria and documentation are clear and thorough, particularly when asserting exemption from high-risk classification under Article 6(3). They should also consider and document the circumstances in which that assessment would be expected to change, such as where new features are added or scope creep occurs.

Revised post-market monitoring requirements

The AI Omnibus would also provide greater flexibility for genuinely high-risk AI systems through removal of the AI Act’s requirement for “harmonised” plans for post-market monitoring.

Post-market monitoring is and will remain a critical obligation for providers of high-risk systems. Those monitoring requirements are set out in Chapter IX, Section 1 of the Act and, for example, require gathering and tracking of system performance data and incidents not only during system development and training but also ‘post-market’, after full deployment. Such requirements work to ensure continuous compliance with high-risk system obligations under the Act and to prevent ‘model decay’, which can affect accuracy and system bias over time.

As currently drafted, Article 72(3) of the AI Act empowers the Commission to adopt an implementing act setting out detailed provisions for a uniform post-market monitoring plan template. This provision essentially paves the way for a mandatory, one-size-fits-all formula for the post-market monitoring plan which providers of high-risk AI systems would then be required to adopt.

At the moment there are numerous performance metrics that can be used to measure system robustness and fairness. Appropriate metrics must be selected in consideration of the system’s particular function and in line with the provider’s organisational values and principles. A requirement for uniform post-market monitoring of system performance could therefore risk imposing higher or lower burdens than actually required to ensure effective and ethical AI systems.

With its AI Omnibus proposals, the Commission intends to limit its powers to the adoption of ‘guidance’ on the contents of the post-market monitoring plan. This restriction aims to give AI system providers greater flexibility to tailor their post-market monitoring plans to their own circumstances. The change also reflects the Commission’s increasing reliance on guidance rather than AI Act amendments: it is currently developing guidelines on risk classification, the application of high-risk requirements, and the AI Act’s interplay with other EU legislation, for example.

Although many checks on high-risk AI must apply uniformly, the AI Omnibus proposals are sympathetic to the notion that measuring system performance involves case-specific factors and metrics. This is consistent with the broader language of Article 15(1) which requires high-risk AI systems to be designed and developed to demonstrate an “appropriate level” of accuracy, robustness and cybersecurity and to “perform consistently in those respects throughout their lifecycle”.

Businesses should note that other reporting and monitoring requirements contained in the AI Act may not be subject to equivalent changes and therefore may continue to contain more rigid requirements. For example, Article 12 relates to record-keeping over the lifetime of the AI system and sets out additional uniformly applicable requirements. Article 15 also states that the Commission shall, in cooperation with stakeholders and benchmarking authorities, encourage the development of benchmarks and measurement methodologies.

Updated rules for regulatory sandboxes and real-world testing

The AI Omnibus further proposes changes to the AI Act rules surrounding regulatory sandboxes and real-world testing, to create greater access to these tools prior to deployment of complex or novel AI systems.

According to the Commission, AI regulatory sandboxes offer a “controlled environment” for providers to test their AI systems under supervision by national competent authorities; they essentially help mitigate non-compliance risks through opportunities for experimentation and open dialogue with regulators during a limited period before systems are placed on the market or put into service pursuant to specific, agreed sandbox plans. The current AI Act provisions on regulatory sandboxes are found in Articles 57-59.

The AI Omnibus envisages a significant expansion of regulatory sandboxes. While the AI Act currently requires member states to establish at least one sandbox at the national level, the AI Omnibus would provide a legal basis for the AI Office to introduce EU-level sandboxes, for which priority access would be granted to SMEs. The proposed amendments further empower the Commission to adopt implementing acts imposing specific regulatory sandbox requirements and requiring greater member state coordination of national sandboxes, to avoid fragmentation. Notably, however, the Commission’s proposals do not entail amendment of Article 82, which empowers national authorities to impose additional requirements on compliant high-risk systems. This would effectively leave in place existing risks of divergent national practices during enforcement.

Separately, the AI Act also currently allows for a measure of real-world testing under Article 60. In contrast to the controlled environments of regulatory sandboxes, real-world testing involves testing in real operational environments before the system is formally placed on the market. This is useful for systems that require exposure to real users, actual data, or complex dynamic environments to obtain practical validation.

Under the AI Omnibus proposals, real-world testing would be permitted not only for high-risk AI systems listed in Annex III – where certain categories or uses of systems are deemed ‘high-risk’ by default – but also for some high-risk AI systems covered by Union harmonised legislation, as listed in Annex I Section A of the AI Act – like medical devices, industrial machinery and toys. For systems covered by Union harmonised legislation listed instead in Annex I Section B – including aircraft and vehicles – a new Article 60a has been proposed which would permit real-world testing subject to voluntary written agreements between member states and the Commission, further broadening the possibilities for real-world testing outside regulatory sandboxes.

When considering whether to participate in regulatory sandboxes or real-world testing, businesses should be aware of the remaining limitations of these tools.

Successful participation in regulatory sandboxes and some real-world testing can serve as evidence of compliance for purposes of conformity assessments or market surveillance activities. Specifically, the AI Act states that proof of such participation will be “taken positively into account”. While this may accelerate conformity assessments, the AI Act and AI Omnibus each fall short of granting a presumption of conformity based on sandbox participation or real-world testing. Presumptions of conformity are available based on demonstrated compliance with certain other EU regulatory frameworks, so it is perhaps surprising that regulatory sandboxes and real-world testing still have not been given similar standing.

Expansion of AI Office powers

Several implementation challenges have emerged since adoption of the AI Act, including delays in designating national competent authorities and the need for consistent oversight of ‘high-impact’ AI systems, such as those embedded in ‘very large online platforms’ or those based on popular GPAI models.

To address these challenges, the Commission has proposed in the AI Omnibus to centralise enforcement in the hands of the AI Office and to enhance its supervisory powers. In doing so, it aims to address the risk of fragmented national enforcement and to provide a coherent approach to monitoring complex AI ecosystems. This is particularly relevant for cross-border services and GPAI models ‘with systemic risks’, which may be ill-suited for effective management by individual member states.

In practical terms, the AI Office will be the responsible supervisory authority for “all AI systems based on general-purpose AI models developed by the same provider”, and it is granted the appropriate powers to effectively conduct the tasks and responsibilities of market surveillance authorities under the AI Act, subject to further specification in forthcoming implementing or delegated acts. The AI Omnibus proposal therefore seeks to pivot from the decentralised enforcement structure currently provided for under the AI Act to a more centralised alternative, when it comes to the most impactful AI models and systems.

Prepare to comply while expecting reform

The AI Omnibus proposals are, so far, only that – the Commission’s recommended reforms are subject to change – and it is possible that existing AI Act requirements and timelines will remain in effect. Businesses must therefore plan for the worst while hoping that the legislative process ahead results in positive outcomes for them. An immediate focus in this regard should be preparing for new AI Act requirements – including the new rules relating to ‘high-risk’ AI systems – to take effect on 2 August 2026. However, businesses should also monitor developments in relation to the adoption of the AI Omnibus proposals, which may ultimately delay these deadlines or render some preparations obsolete.

Co-written by Carissa Wilson of Pinsent Masons.
