On August 2, 2026 — less than three months from now — the most consequential tranche of the EU AI Act is scheduled to begin applying. Obligations for so-called high-risk AI systems, covering everything from CV-screening tools and credit-scoring engines to safety components in critical infrastructure, will kick in across the bloc. Yet rather than a triumphant on-time landing, Brussels finds itself in the middle of a genuine debate about whether the regime as drafted is workable — and whether parts of it should be paused, simplified, or softened through the European Commission's Digital Omnibus package.
This is not a fringe lobbying campaign. It is a serious policy conversation, and the Commission is right to be having it.
How we got here
The AI Act entered into force on August 1, 2024, with obligations phased in over several years. Prohibitions on unacceptable-risk practices took effect on February 2, 2025. Obligations on providers of general-purpose AI (GPAI) models began to apply on August 2, 2025, paired with a voluntary GPAI Code of Practice, finalised in July 2025, to help frontier model developers demonstrate compliance. OpenAI, Google, Microsoft and Anthropic signed on. Meta, notably, declined, arguing the Code went beyond what the statute itself required.
The 2026 deadline is the big one. Annex III high-risk categories alone touch a vast slice of the European digital economy: HR tech, edtech, fintech, insurance, healthcare triage, biometric identification, and large parts of the public sector. Providers and many deployers will need quality management systems, technical documentation, post-market monitoring, conformity assessments and registration in an EU database — built and audited before they ship.
Why the Omnibus is the right instinct
The Digital Omnibus, proposed by the Commission in late 2025, is an explicit acknowledgement that Europe's stacked digital rulebook — AI Act, GDPR, Data Act, DSA, DMA, NIS2 — has produced overlapping reporting burdens, ambiguous interactions and a compliance bill that disproportionately hits smaller players. President Ursula von der Leyen and Executive Vice-President Henna Virkkunen have framed the simplification agenda as a competitiveness imperative rather than a deregulatory one, echoing the diagnosis in Mario Draghi's 2024 report on EU competitiveness.
That instinct is correct. A rule that cannot be implemented on time, by the firms it is supposed to govern, is not a strong rule — it is a paper rule. And paper rules damage the credibility of the entire regulatory project.
Three pressure points have emerged:
- Standards are not ready. The harmonised standards being drafted by CEN-CENELEC's JTC 21, which, once cited in the Official Journal, would give providers a presumption of conformity for high-risk systems, have slipped repeatedly. Without them, providers face conformity assessments against an abstract legal text.
- The AI Office is still scaling. The Commission's new AI Office, the central enforcer for GPAI rules, has been hiring through 2025 but is not yet at full capacity. National market-surveillance authorities are in even earlier stages of build-out.
- Definitions are still unsettled. What counts as a "substantial modification" of a high-risk system, where the line between provider and deployer sits, and how open-source carve-outs interact with downstream fine-tuning all remain contested.
What a sensible recalibration looks like
A targeted, transparent delay or grace period on enforcement of high-risk obligations — say, six to twelve months — would not be a surrender to industry lobbying. It would be a pragmatic admission that the supporting infrastructure (standards, guidance, notified bodies, the AI Office itself) is not yet where it needs to be. The United Kingdom's lighter-touch, regulator-led approach and the United States' executive-order-and-NIST framework offer competing models; if EU firms find Brussels' rules unimplementable while their American and British rivals scale, the long-term loser is European AI capacity.
What the Omnibus should not do is hollow out the substance. The prohibitions on social scoring and untargeted scraping for facial-recognition databases, the transparency duties on synthetic content, and the systemic-risk obligations on the most capable GPAI models are well-targeted and politically settled. The GPAI Code of Practice, for all its imperfections, has produced something genuinely useful: a workable compliance template that four of the world's leading model developers have signed. That achievement should be reinforced, not relitigated.
The bigger picture
Europe's regulatory ambition on AI has been admirable, and the Act's risk-based architecture remains a more defensible model than blanket bans or laissez-faire. But ambition without operational readiness is how good laws get discredited. A short, candid extension of the high-risk timeline — coupled with finalised standards, clearer guidance on the provider/deployer split, and a serious SME compliance pathway — would strengthen the Act, not weaken it.
The choice in front of Brussels is not regulation versus deregulation. It is whether the EU's flagship AI law arrives in August 2026 as a credible, enforceable regime or as a missed deadline papered over with enforcement discretion. A confident regulator delays when delay is warranted, and enforces when enforcement is ready. That is the version of the AI Act worth defending.