Artificial intelligence act

In “A Europe Fit for the Digital Age”


The uptake of Artificial Intelligence (AI) systems has a strong potential to deliver societal benefits and economic growth, and to enhance EU innovation and global competitiveness. At the same time, it is commonly acknowledged that the specific characteristics of certain AI systems raise concerns, especially with regard to safety, security and the protection of fundamental rights. Against this background, the European Commission unveiled a proposal for a new Artificial Intelligence Act (AI Act) in April 2021.

The Commission proposes to enshrine in EU law a technology-neutral definition of AI systems. It also proposes to adopt different sets of rules tailored to a risk-based approach with four levels of risk:

  • Unacceptable risk AI. Harmful uses of AI that contravene EU values (such as social scoring by governments) will be banned because of the unacceptable risk they create; 
  • High-risk AI. A number of AI systems (listed in an Annex) that create an adverse impact on people's safety or their fundamental rights are considered high-risk. To ensure trust and a consistently high level of protection of safety and fundamental rights, a range of mandatory requirements (including a conformity assessment) would apply to all high-risk systems; 
  • Limited risk AI. Some AI systems will be subject to a limited set of obligations (e.g. transparency); 
  • Minimal risk AI. All other AI systems can be developed and used in the EU without legal obligations beyond those in existing legislation.

The proposal is now being discussed by the co-legislators, the European Parliament and the Council (for further information, see the legislative briefing below).

The Council adopted its common position ('general approach') on the AI Act on 6 December 2022. The Council's text, inter alia:

  • narrows down the definition of AI to systems developed through machine learning approaches and logic- and knowledge-based approaches;
  • extends to private actors the prohibition on using AI for social scoring;
  • adds a horizontal layer on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured;
  • clarifies the requirements for high-risk AI systems;
  • adds new provisions to account for situations where AI systems can be used for many different purposes (general-purpose AI);
  • clarifies the scope of the AI act (e.g. explicit exclusion of national security, defence, and military purposes from the scope of the AI Act) and provisions relating to law enforcement authorities;
  • simplifies the compliance framework for the AI Act;
  • adds new provisions to increase transparency and to allow users to lodge complaints;
  • substantially modifies the provisions concerning measures in support of innovation (e.g. AI regulatory sandboxes).

In Parliament, the discussions are led by the Committee on Internal Market and Consumer Protection (IMCO; rapporteur: Brando Benifei, S&D, Italy) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE; rapporteur: Dragos Tudorache, Renew, Romania) under a joint committee procedure. Parliament adopted its negotiating position (499 votes in favour, 28 against and 93 abstentions) in June 2023, with substantial amendments to the Commission's text, including, inter alia:

  • MEPs amended the definition of AI systems to align it with the definition agreed by the Organisation for Economic Co-operation and Development (OECD).
  • MEPs substantially amended the list of AI systems prohibited in the EU. Parliament wants to ban the use of biometric identification systems in the EU for both real-time and ex-post use (except in cases of severe crime and with pre-judicial authorisation for ex-post use), and not only for real-time use as proposed by the Commission. Furthermore, Parliament wants to ban all biometric categorisation systems using sensitive characteristics, predictive policing systems, emotion recognition systems, and AI systems using indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
  • While the Commission proposed to automatically categorise as high-risk all systems falling within certain areas or use cases, Parliament adds the additional requirement that a system must pose a 'significant risk' to qualify as high-risk. Furthermore, Parliament requires those deploying a high-risk system in the EU to carry out a fundamental rights impact assessment, including a consultation with the competent authority and relevant stakeholders.
  • Parliament wants to enshrine in the AI Act a layered approach to regulating general-purpose AI systems. Parliament wants to impose an obligation on providers of foundation models to ensure robust protection of fundamental rights, health, safety, the environment, democracy and the rule of law. Furthermore, generative foundation AI models (such as ChatGPT) that use large language models to generate art, music and other content would be subject to stringent transparency obligations. Finally, all foundation models should provide the information necessary for downstream providers to comply with their obligations under the AI Act.
  • National authorities' competences have been strengthened, and Parliament also proposes to establish an AI Office, a new EU body to support the harmonised application of the AI Act, provide guidance and coordinate joint cross-border investigations.
  • In order to support innovation, Parliament proposes that research activities and the development of free and open-source AI components be largely exempted from compliance with the AI Act rules.

EU lawmakers have started negotiations to finalise the new legislation. Trilogue meetings took place in June, July, September and October 2023, and protracted negotiations are ongoing. The next round of trilogue discussions will take place on 6 December 2023.


Further reading:

Author: Tambiama Madiega, Members' Research Service.

As of 23/11/2023.