Why delaying AI Act enforcement is essential for European SMEs

This is an op-ed by Sebastiano Toffaletti, DIGITAL SME Secretary General.

In ten months, the EU will begin enforcing the AI Act’s high-risk obligations on European companies. But as the deadline approaches, critical tools necessary for implementation are missing. The EU seems to be rushing towards enforcement it is not prepared for, leaving European SMEs struggling to achieve compliance.

It is clear that the AI Act’s ambitious goals are worthy of support. But ambition without readiness risks stifling European AI innovation rather than strengthening it. The regulation promised a comprehensive framework of implementation tools: harmonized standards, regulatory sandboxes, national authorities, and guidance documents. These aren’t ‘nice-to-haves’. They’re the foundation that makes compliance possible, especially for small and medium-sized enterprises that lack the vast resources of large companies. And right now, that foundation is incomplete.

The Infrastructure Gap

Here’s where we stand today: of the 45 technical standards needed, only 15 have been published by CEN-CENELEC and none yet by the Commission. Even under optimistic projections, nearly half won’t be ready when high-risk obligations take effect in August 2026. Companies are supposed to demonstrate compliance with standards that don’t exist yet.

The following chart illustrates the state of AI standards development as of October 2025: of the 45 standards foreseen for publication, only 15 have been officially published.

The situation with regulatory sandboxes is starker still. These controlled testing environments are supposed to give companies – especially small and medium-sized ones – a safe space to validate their AI systems and prepare for compliance. The Act explicitly gives SMEs priority access. Yet across the EU’s 27 member states, only Spain had a sandbox ready by May 2025 – and even that one was not fully tested – while Denmark, France and Luxembourg planned to adapt existing national sandboxes for the AI Act.

Ten member states haven’t even proposed legislation to create one. Others are stuck in long legislative processes. Meanwhile, the dedicated EU project supporting sandbox rollout won’t deliver its final guidance until July 2026 – giving member states exactly one month before the enforcement deadline.

The story is similar for national authorities. Member states were required to designate market surveillance authorities to oversee compliance and provide guidance. Only eight have done so. It is difficult for an SME to consult with an authority that hasn’t even been appointed yet.

As the chart shows, the European Commission’s official data indicate that only eight member states have clearly designated market surveillance authorities, while the rest are either still in the process or have not yet begun it.

Who Bears the Cost?

In this situation, European entrepreneurs face an impossible choice: guess at compliance and risk costly rework, or delay innovation under legal uncertainty. Large foundation model providers with dedicated compliance teams can absorb these burdens; SMEs cannot.

Premature enforcement would impose unsustainable costs and legal ambiguity on thousands of European innovators – precisely the actors Europe needs to succeed in AI. As Mario Draghi emphasised on the anniversary of his competitiveness report: enforcement must not outpace readiness.

A Practical Solution

To avoid these negative effects, Europe should embrace a simple fix: enforcement of high-risk AI obligations for SMEs should begin no earlier than six months after all relevant standards, regulatory sandboxes, and related guidance are operational.

Six months is what companies need to review standards, test systems in sandboxes, and implement required measures properly. This approach doesn’t weaken the AI Act: it strengthens it by ensuring compliance is possible, not punitive.

Today, the digital rulebook is more than legislation – it’s a strategic geopolitical instrument. For the EU to maintain credibility and global influence, it must deliver a regulatory framework that not only upholds fairness but actively enables innovation. We must resist pressure from powerful incumbents and politically motivated narratives that risk distorting the internal market.

The Clock Is Ticking

The pressure to move forward is understandable. The AI Act represents years of work, countless negotiations, real political capital. No one wants to be seen as delaying such an important piece of legislation.

But effective regulation requires alignment between legal frameworks and implementation capacity. Without this alignment, the AI Act risks becoming a competitive disadvantage rather than the strategic asset it was designed to be – weakening precisely the SME ecosystem that Europe’s technological sovereignty depends upon.

Without proper implementation, both the intent of the Regulation and SMEs’ ability to meet requirements are at risk. Premature enforcement would jeopardise Europe’s digital sovereignty, weaken its innovation ecosystem, and ultimately undermine the goals of the legislation itself.

For detailed analysis of implementation gaps, including data on standards development and sandbox readiness, see the report.
