Tiered regulation of AI foundation models to support SME innovation

EU decision-makers have signaled that they could reach a final agreement on the AI Regulation (AI Act) this Wednesday, December 6. Ahead of this crucial round of negotiations, the European DIGITAL SME Alliance is calling on EU decision-makers to fully consider the needs of small and medium-sized enterprises (SMEs), which form the backbone of the Digital Single Market.

The upcoming round of negotiations on the AI Act has been accompanied by a non-paper published by the governments of Germany, France and Italy. The non-paper contains a proposal for the self-regulation of providers of AI foundation models. DIGITAL SME welcomes such a lighter, self-regulatory approach for providers of AI foundation models that are start-ups or medium-sized ICT companies. However, large, dominant foundation model providers should be regulated, as they would otherwise shift the responsibility for compliance onto downstream users, especially SMEs.

Big Tech companies that develop very large foundation models provide developers with ready-made models that they can adapt to build new, innovative AI products. DIGITAL SME believes that these providers of foundation models should undergo a third-party conformity assessment to ensure a fair distribution of responsibilities. In this way, the regulation would spare smaller deployers of foundation models from high compliance costs and thus lower the market entry barrier for SMEs. At the same time, the development of new foundation models in Europe must not be hindered, so that the SME-driven digital economy is not overregulated.

“Essentially, we should only regulate applications on a risk basis. The foundation models on which applications are built should also only be regulated if they dominate the market,” says Dr. Oliver Grün, President of the European DIGITAL SME Alliance.

For such a solution to be effective, the term ‘very large foundation models’ must be precisely defined. Following the approach of the EU Digital Markets Act, it could include three different, complementary quantitative thresholds: a) computing power, b) number of end users, c) number of business users. This definition would not only address regulatory concerns, but also concerns about European innovation capacity. In this way, the EU can find the right balance between safety and innovation.
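To illustrate how such a DMA-style designation could work in practice, the following is a minimal sketch assuming the three criteria are cumulative (a provider falls into the stricter tier only if it crosses all of them). The threshold values, field names and the is_very_large function are purely hypothetical placeholders for illustration and are not taken from the AI Act or any legislative text.

from dataclasses import dataclass

@dataclass
class FoundationModel:
    name: str
    training_compute_flops: float  # a) computing power used to train the model
    end_users: int                 # b) number of end users
    business_users: int            # c) number of business users building on the model

# Hypothetical placeholder thresholds; actual values would be set by the legislator.
COMPUTE_THRESHOLD_FLOPS = 1e25
END_USER_THRESHOLD = 45_000_000
BUSINESS_USER_THRESHOLD = 10_000

def is_very_large(model: FoundationModel) -> bool:
    """Classify a model as 'very large' only if it meets all three complementary thresholds."""
    return (
        model.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
        and model.end_users >= END_USER_THRESHOLD
        and model.business_users >= BUSINESS_USER_THRESHOLD
    )

# Example: a model below any one of the thresholds would stay in the lighter, self-regulatory tier.
print(is_very_large(FoundationModel("example-model", 2e25, 60_000_000, 15_000)))  # True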

A precise definition of very large foundation models should also be combined with a precise definition of high-risk applications in the final text. According to researchers, under the original definition, up to 58% of AI systems could be classified as high-risk and subject to significant compliance requirements. This would have a far-reaching impact on the competitiveness of many innovative SMEs developing their own AI solutions, which would be forced out of the market as a result.
