Parliamentary question - E-002203/2024(ASW)

Answer given by Executive Vice-President Virkkunen on behalf of the European Commission

Overall, the Commission is strongly committed to reducing administrative burdens and simplifying legislation. The Artificial Intelligence (AI) Act[1] limits the administrative burden placed on providers and deployers of AI systems.

Regulatory requirements are, for the most part, imposed only on the small number of AI systems deemed ‘high-risk’ to the health and safety of EU citizens.

The vast majority of AI systems are not subject to the main obligations of the Act. Moreover, even for these few high-risk AI systems, the requirements imposed on providers and deployers are limited.

For example, for many such systems, conformity with the Act's obligations is established through self-assessment, simplifying risk management.

Under the AI Act, Member States are obliged to support small and medium-sized enterprises (SMEs) that provide AI systems by creating regulatory sandboxes: controlled environments that facilitate the development, training, and validation of AI systems.

A key aim of such sandboxes is to accelerate access to the EU market for the AI systems of SMEs, which often have limited access to legal expertise. The AI Act also provides for Member States to deliver training activities for SMEs on the application of the Act.

The Commission is building on two existing initiatives of the Digital Europe Programme: European Digital Innovation Hubs (EDIHs)[2] and Testing and Experimentation Facilities (TEFs)[3].

Over 150 EDIHs have delivered over 20 000 services to SMEs. TEFs in four sectors have been launched to help SMEs test AI technologies while adhering to regulatory requirements.

Last updated: 21 January 2025