Why delaying AI Act enforcement is essential for European SMEs

This is an op-ed by Sebastiano Toffaletti, DIGITAL SME Secretary General.

In ten months, the EU will start enforcing the AI Act against companies deploying “high-risk” solutions. Yet, as the deadline approaches, key tools needed for implementation are still missing. The EU seems to be rushing toward enforcement it isn’t ready for — while European SMEs struggle to comply.

The AI Act’s ambition to make technology in Europe trustworthy and human-centred deserves full support. But ambition without readiness risks stifling European AI innovation rather than strengthening it. The regulation promised a comprehensive framework of implementation tools: harmonised standards, regulatory sandboxes, national enforcement authorities, and guidance documents. These are not “nice-to-haves” — they are the foundation that makes compliance possible, particularly for small and medium-sized enterprises that lack the dedicated resources larger companies enjoy. And right now, that foundation remains incomplete.

The enforcement gap

Here’s where we stand today: of the 45 technical standards needed, only 15 have been published. Even under optimistic projections, nearly half won’t be ready when high-risk obligations take effect in August 2026. Companies are expected to demonstrate compliance with standards that do not yet exist.

The following chart illustrates the state of development of Artificial Intelligence standards as of October 2025: of the 45 standards foreseen for publication, only 15 have been officially published.

The situation with regulatory sandboxes is starker still. These controlled testing environments are supposed to give companies – especially small and medium-sized ones – a safe space to validate their AI systems and prepare for compliance. The Act explicitly gives SMEs priority access. But across the EU’s 27 member states, only Spain has a sandbox ready.

Ten member states haven’t even proposed legislation to create one. Others are stuck in long legislative processes. Meanwhile, EUSAiR – the dedicated EU initiative supporting sandbox rollout in member states – won’t deliver its final guidance until July 2026, leaving member states exactly one month before the enforcement deadline.

The story is similar for national enforcement authorities. The AI Act requires member states to designate market surveillance authorities to oversee compliance and provide guidance. Only eight have done so. SMEs in the other nineteen countries do not even know which authorities to consult.

As the chart depicts, the European Commission’s official data show that only eight member states have clearly designated market surveillance authorities, while the others are either still in the process or have not yet begun it.

Who bears the cost?

In this situation, European entrepreneurs face an impossible choice: design their AI solutions guessing at compliance and risking costly rework, or delay innovation under legal uncertainty. Large foundation model providers with dedicated compliance teams can absorb these burdens, but SMEs cannot.

Premature enforcement would impose unsustainable costs and legal ambiguity on thousands of European innovators – precisely the actors Europe needs to succeed in AI. As Mario Draghi emphasised on the anniversary of his competitiveness report, enforcement must not outpace readiness.

A practical solution

To avoid these negative effects, Europe should embrace a simple fix: enforcement of high-risk AI obligations for SMEs should begin no earlier than six months after all relevant standards, regulatory sandboxes, and related guidance are operational.

SMEs need enough time to review standards, test systems in sandboxes, and implement required measures properly. This approach does not weaken the AI Act: it strengthens it by ensuring compliance is possible, not punitive.

Today, the digital rulebook is more than legislation – it’s a strategic geopolitical instrument. For the EU to maintain credibility and global influence, it must deliver a regulatory framework that not only upholds fairness but also actively enables innovation. We must resist pressure from powerful incumbents and politically motivated narratives that risk distorting the internal market.

The clock is ticking

The pressure to move forward is understandable. The AI Act represents years of work, countless negotiations, and real political capital. No one wants to be seen as delaying it.

But effective regulation requires alignment between legal frameworks and implementation capacity. Without this alignment, the AI Act risks becoming a competitive disadvantage rather than the strategic asset it was designed to be – weakening precisely the SME ecosystem that Europe’s technological sovereignty depends upon.

Without proper implementation, both the intent of the Regulation and SMEs’ ability to meet requirements are at risk. Premature enforcement would jeopardise Europe’s digital sovereignty, weaken its innovation ecosystem, and ultimately undermine the goals of the legislation itself.

For a detailed analysis of implementation gaps, including data on standards development and sandbox readiness, check the report below.
