REPORT on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts

22.5.2023 - (COM(2021)0206 – C9‑0146/2021 – 2021/0106(COD)) - ***I

Committee on the Internal Market and Consumer Protection
Committee on Civil Liberties, Justice and Home Affairs
Rapporteurs: Brando Benifei, Ioan-Dragoş Tudorache
(Joint committee procedure – Rule 58 of the Rules of Procedure)
Rapporteurs for the opinions of associated committees pursuant to Rule 57 of the Rules of Procedure:
Eva Maydell, Committee on Industry, Research and Energy
Marcel Kolaja, Committee on Culture and Education
Axel Voss, Committee on Legal Affairs

DRAFT EUROPEAN PARLIAMENT LEGISLATIVE RESOLUTION

on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts

(COM(2021)0206 – C9‑0146/2021 – 2021/0106(COD))

(Ordinary legislative procedure: first reading)

The European Parliament,

– having regard to the Commission proposal to Parliament and the Council (COM(2021)0206),

– having regard to Article 294(2) and Articles 16 and 114 of the Treaty on the Functioning of the European Union, pursuant to which the Commission submitted the proposal to Parliament (C9‑0146/2021),

– having regard to Article 294(3) of the Treaty on the Functioning of the European Union,

– having regard to Rule 59 of its Rules of Procedure,

– having regard to the joint deliberations of the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs under Rule 58 of the Rules of Procedure,

– having regard to the opinions of the Committee on Industry, Research and Energy, the Committee on Culture and Education, the Committee on Legal Affairs, the Committee on the Environment, Public Health and Food Safety and the Committee on Transport and Tourism,

– having regard to the report of the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs (A9-0188/2023),

1. Adopts its position at first reading hereinafter set out;

2. Calls on the Commission to refer the matter to Parliament again if it replaces, substantially amends or intends to substantially amend its proposal;

3. Instructs its President to forward its position to the Council, the Commission and the national parliaments.


 

Amendment  1

 

Proposal for a regulation

Citation 4 a (new)

 

Text proposed by the Commission

Amendment

 

Having regard to the opinion of the European Central Bank,

Amendment  2

 

Proposal for a regulation

Citation 4 b (new)

 

Text proposed by the Commission

Amendment

 

Having regard to the joint opinion of the European Data Protection Board and the European Data Protection Supervisor,

Amendment  3

 

Proposal for a regulation

Recital 1

 

Text proposed by the Commission

Amendment

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(1) The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market. This Regulation lays down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence in conformity with Union values and ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of Artificial Intelligence systems (AI systems), unless explicitly authorised by this Regulation. Certain AI systems can also have an impact on democracy and rule of law and the environment. These concerns are specifically addressed in the critical sectors and use cases listed in the annexes to this Regulation.

Amendment  4

 

Proposal for a regulation

Recital 1 a (new)

 

Text proposed by the Commission

Amendment

 

(1 a) This Regulation should preserve the values of the Union, facilitating the distribution of artificial intelligence benefits across society, protecting individuals, companies, democracy and rule of law and the environment from risks while boosting innovation and employment and making the Union a leader in the field.

Amendment  5

 

Proposal for a regulation

Recital 2

 

Text proposed by the Commission

Amendment

(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.

(2) AI systems can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU).

Amendment  6

 

Proposal for a regulation

Recital 2 a (new)

 

Text proposed by the Commission

Amendment

 

(2 a) As artificial intelligence often relies on the processing of large volumes of data, and many AI systems and applications on the processing of personal data, it is appropriate to base this Regulation on Article 16 TFEU, which enshrines the right to the protection of natural persons with regard to the processing of personal data and provides for the adoption of rules on the protection of individuals with regard to the processing of personal data.

Amendment  7

 

Proposal for a regulation

Recital 2 b (new)

 

Text proposed by the Commission

Amendment

 

(2 b) The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680. Directive 2002/58/EC additionally protects private life and the confidentiality of communications, including by laying down conditions for the storing of personal and non-personal data in, and access to such data from, terminal equipment. Those legal acts provide the basis for sustainable and responsible data processing, including where datasets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. This Regulation does not affect the fundamental rights to private life and the protection of personal data as provided for by Union law on data protection and privacy and enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’).

Amendment  8

 

Proposal for a regulation

Recital 2 c (new)

 

Text proposed by the Commission

Amendment

 

(2 c) Artificial intelligence systems in the Union are subject to relevant product safety legislation that provides a framework protecting consumers against dangerous products in general, and such legislation should continue to apply. This Regulation is also without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020, Directive 2001/95/EC on general product safety and Directive 2013/11/EU.

Amendment  9

 

Proposal for a regulation

Recital 2 d (new)

 

Text proposed by the Commission

Amendment

 

(2 d) In accordance with Article 114(2) TFEU, this Regulation complements and should not undermine the rights and interests of employed persons. This Regulation should therefore not affect Union law on social policy and national labour law and practice, that is, any legal and contractual provision concerning employment conditions and working conditions, including health and safety at work, and the relationship between employers and workers, including information, consultation and participation. This Regulation should not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor should it affect concertation practices, the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law and/or practice. It should in any event not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems.

 

Amendment  10

 

Proposal for a regulation

Recital 2 e (new)

 

Text proposed by the Commission

Amendment

 

(2 e) This Regulation should not affect the provisions aiming to improve working conditions in platform work set out in Directive ... [COD 2021/414/EC].

 

Amendment  11

 

Proposal for a regulation

Recital 2 f (new)

 

Text proposed by the Commission

Amendment

 

(2 f) This Regulation should help in supporting research and innovation, should not undermine research and development activity, and should respect the freedom of scientific research. It is therefore necessary to exclude from its scope AI systems specifically developed for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. Under all circumstances, any research and development activity should be carried out in accordance with the Charter, Union law as well as national law.

Amendment  12

 

Proposal for a regulation

Recital 3

 

Text proposed by the Commission

Amendment

(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.

(3) Artificial intelligence is a fast evolving family of technologies that can contribute, and already contributes, to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities if developed in accordance with relevant general principles in line with the Charter and the values on which the Union is founded. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, crisis management, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.

Amendment  13

 

Proposal for a regulation

Recital 3 a (new)

 

Text proposed by the Commission

Amendment

 

(3 a) To contribute to reaching the carbon neutrality targets, European companies should seek to utilise all available technological advancements that can assist in realising this goal. Artificial intelligence is a technology that has the potential to be used to process the ever-growing amount of data created during industrial, environmental, health and other processes. To facilitate investments in AI-based analysis and optimisation tools, this Regulation should provide a predictable and proportionate environment for low-risk industrial solutions.

Amendment  14

 

Proposal for a regulation

Recital 4

 

Text proposed by the Commission

Amendment

(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.

(4) At the same time, depending on the circumstances regarding its specific application and use, as well as the level of technological development, artificial intelligence may generate risks and cause harm to public or private interests and fundamental rights of natural persons that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.

Amendment  15

 

Proposal for a regulation

Recital 4 a (new)

 

Text proposed by the Commission

Amendment

 

(4 a) Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of society and the common good. Safeguards should be provided to ensure the development and use of ethically embedded artificial intelligence that respects Union values and the Charter.

Amendment  16

 

Proposal for a regulation

Recital 5

 

Text proposed by the Commission

Amendment

(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 .

(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, protection of fundamental rights, democracy and rule of law and the environment, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. These rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, and enabling a European ecosystem of public and private actors creating AI systems in line with Union values. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of promoting AI made in Europe and of the Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34.

__________________

33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6.

34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).

Amendment  17

 

Proposal for a regulation

Recital 5 a (new)

 

Text proposed by the Commission

Amendment

 

(5 a) Furthermore, in order to foster the development of AI systems in line with Union values, the Union needs to address the main gaps and barriers blocking the potential of the digital transformation, including the shortage of digitally skilled workers, cybersecurity concerns, lack of investment and access to investment, and existing and potential gaps between large companies, SMEs and start-ups. Special attention should be paid to ensuring that the benefits of AI and innovation in new technologies are felt across all regions of the Union and that sufficient investment and resources are provided especially to those regions that may be lagging behind in some digital indicators.

Amendment  18

 

Proposal for a regulation

Recital 6

 

Text proposed by the Commission

Amendment

(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to–date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.

(6) The notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonisation and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, it should be based on key characteristics of artificial intelligence, such as its learning, reasoning or modelling capabilities, so as to distinguish it from simpler software systems or programming approaches. AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and of capabilities to operate without human intervention. The term “machine-based” refers to the fact that AI systems run on machines. The reference to explicit or implicit objectives underscores that AI systems can operate according to explicit human-defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. The reference to predictions includes content, which is considered in this Regulation a form of prediction as one of the possible outputs produced by an AI system. For the purposes of this Regulation, environments should be understood as the contexts in which the AI systems operate, whereas outputs generated by the AI system, meaning predictions, recommendations or decisions, respond to the objectives of the system, on the basis of inputs from said environment. Such output further influences said environment, even by merely introducing new information to it.

Amendment  19

 

Proposal for a regulation

Recital 6 a (new)

 

Text proposed by the Commission

Amendment

 

(6 a) AI systems often have machine learning capacities that allow them to adapt and perform new tasks autonomously. Machine learning refers to the computational process of optimising, from data, the parameters of a model, which is a mathematical construct generating an output based on input data. Machine learning approaches include, for instance, supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks. This Regulation is aimed at addressing new potential risks that may arise by delegating control to AI systems, in particular to those AI systems that can evolve after deployment. The function and outputs of many of these AI systems are based on abstract mathematical relationships that are difficult for humans to understand, monitor and trace back to specific inputs. These complex and opaque characteristics (black box element) impact accountability and explainability. Comparably simpler techniques such as knowledge-based approaches, Bayesian estimation or decision trees may also lead to legal gaps that need to be addressed by this Regulation, in particular when they are used in combination with machine learning approaches in hybrid systems.

 

Amendment  20

 

Proposal for a regulation

Recital 6 b (new)

 

Text proposed by the Commission

Amendment

 

(6 b) AI systems can be used as a stand-alone software system, integrated into a physical product (embedded), used to serve the functionality of a physical product without being integrated therein (non-embedded) or used as an AI component of a larger system. If this larger system would not function without the AI component in question, then the entire larger system should be considered as one single AI system under this Regulation.

Amendment  21

 

Proposal for a regulation

Recital 7

 

Text proposed by the Commission

Amendment

(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 .

(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35. Biometrics-based data are additional data resulting from specific technical processing relating to physical, physiological or behavioural signals of a natural person, such as facial expressions, movements, pulse frequency, voice, keystrokes or gait, which may or may not allow or confirm the unique identification of a natural person.

__________________

35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).

36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).

37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).

Amendment  22

 

Proposal for a regulation

Recital 7 a (new)

 

Text proposed by the Commission

Amendment

 

(7 a) The notion of biometric identification as used in this Regulation should be defined as the automated recognition of physical, physiological, behavioural, and psychological human features such as the face, eye movement, facial expressions, body shape, voice, speech, gait, posture, heart rate, blood pressure, odour, keystrokes, psychological reactions (anger, distress, grief, etc.) for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database (one-to-many identification), irrespective of whether the individual has given their consent or not.

Amendment  23

 

Proposal for a regulation

Recital 7 b (new)

 

Text proposed by the Commission

Amendment

 

(7 b) The notion of biometric categorisation as used in this Regulation should be defined as assigning natural persons to specific categories or inferring their characteristics and attributes such as gender, sex, age, hair colour, eye colour, tattoos, ethnic or social origin, health, mental or physical ability, behavioural or personality traits, language, religion, or membership of a national minority, or sexual or political orientation on the basis of their biometric or biometric-based data, or which can be inferred from such data.

Amendment  24

 

Proposal for a regulation

Recital 8

 

Text proposed by the Commission

Amendment

(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.

(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used, excluding verification systems which merely compare the biometric data of an individual to their previously provided biometric data (one-to-one). Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. Given that the notion of biometric identification is independent of the individual’s consent, this definition applies even when warning notices are placed in the location that is under surveillance of the remote biometric identification system, and is not de facto annulled by pre-enrolment.

Amendment  25

 

Proposal for a regulation

Recital 8 a (new)

 

Text proposed by the Commission

Amendment

 

(8 a) The identification of natural persons at a distance is understood to distinguish remote biometric identification systems from close proximity individual verification systems using biometric identification means, whose sole purpose is to confirm whether or not a specific natural person presenting themselves for identification is permitted, such as in order to gain access to a service, a device, or premises.

Amendment  26

 

Proposal for a regulation

Recital 9

 

Text proposed by the Commission

Amendment

(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.

(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned and regardless of the potential capacity restrictions. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, sports grounds, schools, universities, relevant parts of hospitals and banks, amusement parks, festivals, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.

Amendment  27

 

Proposal for a regulation

Recital 9 a (new)

 

Text proposed by the Commission

Amendment

 

(9 a) It is important to note that AI systems should make best efforts to respect general principles establishing a high-level framework that promotes a coherent human-centric approach to ethical and trustworthy AI in line with the Charter of Fundamental Rights of the European Union and the values on which the Union is founded, including the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness, and societal and environmental wellbeing.

Amendment  28

 

Proposal for a regulation

Recital 9 b (new)

 

Text proposed by the Commission

Amendment

 

(9 b) ‘AI literacy’ refers to the skills, knowledge and understanding that allow providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and the possible harm it can cause, and thereby to promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip providers and users with the notions and skills required to ensure compliance with and enforcement of this Regulation. It is therefore necessary that the Commission, the Member States as well as providers and users of AI systems, in cooperation with all relevant stakeholders, promote the development of a sufficient level of AI literacy, in all sectors of society, for people of all ages, including women and girls, and that progress in that regard is closely followed.

Amendment  29

 

Proposal for a regulation

Recital 10

 

Text proposed by the Commission

Amendment

(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.

(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union and at international level, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union. In order for the Union to be true to its fundamental values, AI systems intended to be used for practices that are considered unacceptable by this Regulation should equally be deemed to be unacceptable outside the Union because of their particularly harmful effect on fundamental rights as enshrined in the Charter. Therefore, it is appropriate to prohibit the export of such AI systems to third countries by providers residing in the Union.

 

Amendment  30

 

Proposal for a regulation

Recital 11

 

Text proposed by the Commission

Amendment

(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.

(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations. This exception should nevertheless be limited to trusted countries and international organisations that share Union values.

Amendment  31

 

Proposal for a regulation

Recital 12

 

Text proposed by the Commission

Amendment

(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or deployer of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

Amendment  32

 

Proposal for a regulation

Recital 12 a (new)

 

Text proposed by the Commission

Amendment

 

(12 a) Software and data that are openly shared, and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market. Research by the Commission also shows that free and open-source software can contribute between EUR 65 billion and EUR 95 billion to the European Union’s GDP and that it can provide significant growth opportunities for the European economy. Users are allowed to run, copy, distribute, study, change and improve software and data, including models, by way of free and open-source licences. To foster the development and deployment of AI, especially by SMEs, start-ups and academic research, but also by individuals, this Regulation should not apply to such free and open-source AI components except to the extent that they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV of this Regulation.

Amendment  33

 

Proposal for a regulation

Recital 12 b (new)

 

Text proposed by the Commission

Amendment

 

(12 b) Neither the collaborative development of free and open-source AI components nor making them available on open repositories should constitute a placing on the market or putting into service. A commercial activity, within the understanding of making available on the market, might however be characterised by charging a price, with the exception of transactions between micro enterprises, for a free and open-source AI component but also by charging a price for technical support services, by providing a software platform through which the provider monetises other services, or by the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software.

Amendment  34

 

Proposal for a regulation

Recital 12 c (new)

 

Text proposed by the Commission

Amendment

 

(12 c) The developers of free and open-source AI components should not be mandated under this Regulation to comply with requirements targeting the AI value chain and, in particular, not towards the provider that has used that free and open-source AI component. Developers of free and open-source AI components should however be encouraged to implement widely adopted documentation practices, such as model and data cards, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.

Amendment  35

 

Proposal for a regulation

Recital 13

 

Text proposed by the Commission

Amendment

(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.

(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights as well as democracy and rule of law and the environment, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter, the European Green Deal, the Joint Declaration on Digital Rights of the Union and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence, and should be non-discriminatory and in line with the Union’s international trade commitments.

Amendment  36

 

Proposal for a regulation

Recital 14

 

Text proposed by the Commission

Amendment

(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.

(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.

Amendment  37

 

Proposal for a regulation

Recital 15

 

Text proposed by the Commission

Amendment

(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

Amendment  38

 

Proposal for a regulation

Recital 16

 

Text proposed by the Commission

Amendment

(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.

(16) The placing on the market, putting into service or use of certain AI systems with the objective or the effect of materially distorting human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. This limitation should be understood to include neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces insofar as they are materially distorting the behaviour of a natural person in a manner that causes or is likely to cause that person or another person significant harm. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of individuals and specific groups of persons due to their known or predicted personality traits, age, physical or mental incapacities, social or economic situation. They do so with the intention or the effect of materially distorting the behaviour of a person and in a manner that causes or is likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the user, such as factors that may not be reasonably foreseen and mitigated by the provider or the deployer of the AI system. In any case, it is not necessary for the provider or the deployer to have the intention to cause the significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibition of such AI practices is complementary to the provisions contained in Directive 2005/29/EC, according to which unfair commercial practices are prohibited, irrespective of whether they are carried out having recourse to AI systems or otherwise. In such a setting, lawful commercial practices, for example in the field of advertising, that are in compliance with Union law should not in themselves be regarded as violating the prohibition. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research and on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian.

 

Amendment  39

 

Proposal for a regulation

Recital 16 a (new)

 

Text proposed by the Commission

Amendment

 

(16 a) AI systems that categorise natural persons by assigning them to specific categories, according to known or inferred sensitive or protected characteristics, are particularly intrusive, violate human dignity and hold great risk of discrimination. Such characteristics include gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union, as well as under Article 9 of Regulation (EU) 2016/679. Such systems should therefore be prohibited.

Amendment  40

 

Proposal for a regulation

Recital 17

 

Text proposed by the Commission

Amendment

(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.

(17) AI systems providing social scoring of natural persons for general purpose may lead to discriminatory outcomes and the exclusion of certain groups. They violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups based on multiple data points and time occurrences related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.

Amendment  41

 

Proposal for a regulation

Recital 18

 

Text proposed by the Commission

Amendment

(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.

(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces is particularly intrusive to the rights and freedoms of the concerned persons, and can ultimately affect the private life of a large part of the population, evoke a feeling of constant surveillance, give parties deploying biometric identification in publicly accessible spaces a position of uncontrollable power and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights at the core of the Rule of Law. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities. The use of those systems in publicly accessible places should therefore be prohibited. Similarly, AI systems used for the analysis of recorded footage of publicly accessible spaces through ‘post’ remote biometric identification systems should also be prohibited, unless used in the context of law enforcement, where strictly necessary for a targeted search connected to a specific serious criminal offence that has already taken place, and only subject to a pre-judicial authorisation.

Amendment  42

 

Proposal for a regulation

Recital 19

 

Text proposed by the Commission

Amendment

(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.

deleted

__________________

 

38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).

 

Amendment  43

 

Proposal for a regulation

Recital 20

 

Text proposed by the Commission

Amendment

(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.

deleted

Amendment  44

 

Proposal for a regulation

Recital 21

 

Text proposed by the Commission

Amendment

(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.

deleted

Amendment  45

 

Proposal for a regulation

Recital 22

 

Text proposed by the Commission

Amendment

(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.

deleted

Amendment  46

 

Proposal for a regulation

Recital 23

 

Text proposed by the Commission

Amendment

(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.

deleted

Amendment  47

 

Proposal for a regulation

Recital 24

 

Text proposed by the Commission

Amendment

(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.

(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces as regulated by this Regulation, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.

Amendment  48

 

Proposal for a regulation

Recital 25

 

Text proposed by the Commission

Amendment

(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.

(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.

Amendment  49

 

Proposal for a regulation

Recital 26

 

Text proposed by the Commission

Amendment

(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.

(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.

Amendment  50

 

Proposal for a regulation

Recital 26 a (new)

 

Text proposed by the Commission

Amendment

 

(26 a) AI systems used by law enforcement authorities or on their behalf to make predictions, profiles or risk assessments based on profiling of natural persons or on data analysis based on personality traits and characteristics, including the person’s location, or on the past criminal behaviour of natural persons or groups of persons, for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence or other criminalised social behaviour or administrative offences, including fraud-prediction systems, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of the presumption of innocence. Such AI systems should therefore be prohibited.

Amendment  51

 

Proposal for a regulation

Recital 26 b (new)

 

Text proposed by the Commission

Amendment

 

(26 b) The indiscriminate and untargeted scraping of biometric data from social media or CCTV footage to create or expand facial recognition databases adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy. The use of AI systems with this intended purpose should therefore be prohibited.

Amendment  52

 

Proposal for a regulation

Recital 26 c (new)

 

Text proposed by the Commission

Amendment

 

(26 c) There are serious concerns about the scientific basis of AI systems aiming to detect emotions from physical or physiological features such as facial expressions, movements, pulse frequency or voice. Emotions or expressions of emotions and perceptions thereof vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such technologies are the limited reliability (emotion categories are neither reliably expressed through, nor unequivocally associated with, a common set of physical or physiological movements), the lack of specificity (physical or physiological expressions do not perfectly match emotion categories) and the limited generalisability (the effects of context and culture are not sufficiently considered). Reliability issues and, consequently, major risks of abuse may especially arise when deploying the system in real-life situations related to law enforcement, border management, the workplace and education institutions. Therefore, the placing on the market, putting into service, or use of AI systems intended to be used in these contexts to detect the emotional state of individuals should be prohibited.

Amendment  53

 

Proposal for a regulation

Recital 26 d (new)

 

Text proposed by the Commission

Amendment

 

(26 d) Practices that are prohibited by Union legislation, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation.

Amendment  54

 

Proposal for a regulation

Recital 27

 

Text proposed by the Commission

Amendment

(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.

(27) High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law, including fundamental rights, democracy, the rule of law or the environment. In order to ensure alignment with sectoral legislation and avoid duplication, requirements for high-risk AI systems should take into account sectoral legislation laying down requirements for high-risk AI systems included in the scope of this Regulation, such as Regulation (EU) 2017/745 on Medical Devices and Regulation (EU) 2017/746 on In Vitro Diagnostic Devices or Directive 2006/42/EC on Machinery. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any. Given the rapid pace of technological development, as well as the potential changes in the use of AI systems, the list of high-risk areas and use-cases in Annex III should nonetheless be subject to permanent review through regular assessment.

Amendment  55

 

Proposal for a regulation

Recital 28

 

Text proposed by the Commission

Amendment

(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector, where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.

(28) AI systems could have an adverse impact on the health and safety of persons, in particular when such systems operate as safety components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector, where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.

Amendment  56

 

Proposal for a regulation

Recital 28 a (new)

 

Text proposed by the Commission

Amendment

 

(28 a) The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, non-discrimination, the right to education, consumer protection, workers’ rights, rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment.

Amendment  57

 

Proposal for a regulation

Recital 29

 

Text proposed by the Commission

Amendment

(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts.

(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment, market surveillance and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts.

__________________

39 Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).

40 Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).

41 Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).

42 Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).

43 Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44).

44 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).

45 Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).

46 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).

Amendment  58

 

Proposal for a regulation

Recital 30

 

Text proposed by the Commission

Amendment

(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.

(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation law listed in Annex II, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes a conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation law, in order to ensure compliance with essential safety requirements. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.

 

Amendment  59

 

Proposal for a regulation

Recital 31

 

Text proposed by the Commission

Amendment

(31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48 , where a third-party conformity assessment is provided for medium-risk and high-risk products.

(31) The classification of an AI system as high-risk pursuant to this Regulation should not mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation law that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48 , where a third-party conformity assessment is provided for medium-risk and high-risk products.

__________________

47 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1).

48 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

Amendment  60

 

Proposal for a regulation

Recital 32

 

Text proposed by the Commission

Amendment

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons and, where the AI system is used as a safety component of a critical infrastructure, to the environment. Such significant risk of harm should be identified by assessing, on the one hand, the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration taken together and, on the other hand, whether the risk can affect an individual, a plurality of persons or a particular group of persons. Such a combination could for instance result in a high severity but low probability of affecting a natural person, or a high probability of affecting a group of persons with a low intensity over a long period of time, depending on the context. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.

Amendment  61

 

Proposal for a regulation

Recital 32 a (new)

 

Text proposed by the Commission

Amendment

 

(32 a) Providers whose AI systems fall under one of the areas and use cases listed in Annex III and who consider that their system does not pose a significant risk of harm to health, safety, fundamental rights or the environment should inform the national supervisory authorities by submitting a reasoned notification. This could take the form of a one-page summary of the relevant information on the AI system in question, including its intended purpose and why it would not pose a significant risk of harm to health, safety, fundamental rights or the environment. The Commission should specify criteria to enable companies to assess whether their system would pose such risks, as well as develop an easy-to-use and standardised template for the notification. Providers should submit the notification as early as possible and in any case prior to the placing of the AI system on the market or its putting into service, ideally at the development stage, and they should be free to place it on the market at any given time after the notification. However, if the authority considers that the AI system in question was misclassified, it should object to the notification within a period of three months. The objection should be substantiated and duly explain why the AI system has been misclassified. The provider should retain the right to appeal by providing further arguments. If after the three months there has been no objection to the notification, national supervisory authorities could still intervene if the AI system presents a risk at national level, as for any other AI system on the market. National supervisory authorities should submit annual reports to the AI Office detailing the notifications received and the decisions taken.

Amendment  62

 

Proposal for a regulation

Recital 33

 

Text proposed by the Commission

Amendment

(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.

deleted

 

Amendment  63

 

Proposal for a regulation

Recital 33 a (new)

 

Text proposed by the Commission

Amendment

 

(33 a) As biometric data constitute a special category of sensitive personal data in accordance with Regulation (EU) 2016/679, it is appropriate to classify as high-risk several critical use-cases of biometric and biometrics-based systems. AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those which are prohibited under this Regulation, should therefore be classified as high-risk. This should not include AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises (one-to-one verification). Biometric and biometrics-based systems which are provided for under Union law to enable cybersecurity and personal data protection measures should not be considered as posing a significant risk of harm to the health, safety and fundamental rights.

 

Amendment  64

 

Proposal for a regulation

Recital 34

 

Text proposed by the Commission

Amendment

(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.

(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of the supply of water, gas, heating and electricity and of critical digital infrastructure, since their failure or malfunctioning may infringe the security and integrity of such critical infrastructure or put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property. Failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to the health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of such safety components may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.

Amendment  65

 

Proposal for a regulation

Recital 35

 

Text proposed by the Commission

Amendment

(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.

(35) The deployment of AI systems in education is important in order to help modernise entire education systems, to increase educational quality, both offline and online, and to accelerate digital education, thus also making it available to a broader audience. AI systems used in education or vocational training, notably for determining access to, or materially influencing decisions on, admission or the assignment of persons to educational and vocational training institutions, for evaluating persons on tests as part of or as a precondition for their education, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or be able to access, or for monitoring and detecting prohibited behaviour of students during tests, should be classified as high-risk AI systems, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against, and may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.

Amendment  66

 

Proposal for a regulation

Recital 36

 

Text proposed by the Commission

Amendment

(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.

(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making or materially influencing decisions on initiation, promotion and termination, and for personalised task allocation based on individual behaviour, personal traits or biometric data, or for the monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the future career prospects, livelihoods and workers’ rights of these persons. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI systems in the employment context.

Amendment  67

 

Proposal for a regulation

Recital 37

 

Text proposed by the Commission

Amendment

(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services, including healthcare services, and essential services, including but not limited to housing, electricity, heating/cooling and internet, and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, gender, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation. Natural persons applying for or receiving public assistance benefits and services from public authorities, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling and internet, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Similarly, AI systems intended to be used to make or materially influence decisions on the eligibility of natural persons for health and life insurance may also have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

 

Amendment  68

 

Proposal for a regulation

Recital 37 a (new)

 

Text proposed by the Commission

Amendment

 

(37 a) Given the role and responsibility of police and judicial authorities, and the impact of the decisions they take for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, some specific use-cases of AI applications in law enforcement have to be classified as high-risk, in particular in instances where there is the potential to significantly affect the lives or the fundamental rights of individuals.

Amendment  69

 

Proposal for a regulation

Recital 38

 

Text proposed by the Commission

Amendment

(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.

(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities, as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for the evaluation of the reliability of evidence in criminal proceedings, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be classified as high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and judicial authorities should not become a factor of inequality, social fracture or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, notably the difficulty in obtaining meaningful information on their functioning and the consequent difficulty in challenging their results in court, in particular by individuals under investigation.

Amendment  70

 

Proposal for a regulation

Recital 39

 

Text proposed by the Commission

Amendment

(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation.

(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools insofar as their use is permitted under relevant Union and national law, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination and assessment of the veracity of evidence in relation to applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status; for monitoring, surveilling or processing personal data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; for the forecasting or prediction of trends related to migration movements and border crossings. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, agencies or bodies as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used to infringe in any way on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.

__________________

49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60).

50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).

Amendment  71

 

Proposal for a regulation

Recital 40

 

Text proposed by the Commission

Amendment

(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.

(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or administrative body or on their behalf to assist judicial authorities or administrative bodies in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution. The use of artificial intelligence tools can support, but should not replace, the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.

Amendment  72

 

Proposal for a regulation

Recital 40 a (new)

 

Text proposed by the Commission

Amendment

 

(40 a) In order to address the risks of undue external interference to the right to vote enshrined in Article 39 of the Charter, and of disproportionate effects on democratic processes, democracy, and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems, with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view.

Amendment  73

 

Proposal for a regulation

Recital 40 b (new)

 

Text proposed by the Commission

Amendment

 

(40 b) Considering the scale of natural persons using the services provided by social media platforms designated as very large online platforms, such online platforms can be used in a way that strongly influences safety online, the shaping of public opinion and discourse, election and democratic processes and societal concerns. It is therefore appropriate that AI systems used by those online platforms in their recommender systems are subject to this Regulation so as to ensure that the AI systems comply with the requirements laid down under this Regulation, including the technical requirements on data governance, technical documentation and traceability, transparency, human oversight, accuracy and robustness. Compliance with this Regulation should enable such very large online platforms to comply with their broader risk assessment and risk-mitigation obligations in Articles 34 and 35 of Regulation (EU) 2022/2065. The obligations in this Regulation are without prejudice to Regulation (EU) 2022/2065 and should complement the obligations required under Regulation (EU) 2022/2065 when the social media platform has been designated as a very large online platform. Given the European-wide impact of social media platforms designated as very large online platforms, the authorities designated under Regulation (EU) 2022/2065 should act as enforcement authorities for the purposes of enforcing this provision.

Amendment  74

 

Proposal for a regulation

Recital 41

 

Text proposed by the Commission

Amendment

(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.

(41) The fact that an AI system is classified as a high risk AI system under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful or unlawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law.

Amendment  75

 

Proposal for a regulation

Recital 41 a (new)

 

Text proposed by the Commission

Amendment

 

(41 a) A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today, including but not limited to EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and national law. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as the Medical Device Regulation in the healthcare sector).

Amendment  76

 

Proposal for a regulation

Recital 42

 

Text proposed by the Commission

Amendment

(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider.

(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for deployers and affected persons, certain mandatory requirements should apply, taking into account the intended purpose and the reasonably foreseeable misuse of the system, and in accordance with the risk management system to be established by the provider. These requirements should be objective-driven, fit for purpose, reasonable and effective, without adding undue regulatory burdens or costs on operators.

Amendment  77

 

Proposal for a regulation

Recital 43

 

Text proposed by the Commission

Amendment

(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.

(43) Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as well as the environment, democracy and rule of law, as applicable in the light of the intended purpose or reasonably foreseeable misuse of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.

Amendment  78

 

Proposal for a regulation

Recital 44

 

Text proposed by the Commission

Amendment

(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.

(44) Access to data of high quality plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and that it does not become a source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training and, where applicable, validation and testing data sets, including the labels, should be sufficiently relevant, representative, appropriately vetted for errors and as complete as possible in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the datasets that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected by the high-risk AI system. Biases can for example be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real-world settings. Results provided by AI systems are influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable or ethnic groups, or racialised communities. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally and following the application of all applicable conditions laid down under this Regulation and in Regulation (EU) 2016/679, Directive (EU) 2016/680 and Regulation (EU) 2018/1725, be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the negative bias detection and correction in relation to high-risk AI systems. Negative bias should be understood as bias that creates a direct or indirect discriminatory effect against a natural person. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services including verification of data governance, data set integrity, and data training, validation and testing practices.

Amendment  79

 

Proposal for a regulation

Recital 45

 

Text proposed by the Commission

Amendment

(45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

(45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

Amendment  80

 

Proposal for a regulation

Recital 45 a (new)

 

Text proposed by the Commission

Amendment

 

(45 a) The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are essential when the processing of data involves significant risks to the fundamental rights of individuals. Providers and users of AI systems should implement state-of-the-art technical and organisational measures in order to protect those rights. Such measures should include not only anonymisation and encryption, but also the use of increasingly available technology that permits algorithms to be brought to the data and allows valuable insights to be derived without the transmission between parties or unnecessary copying of the raw or structured data themselves.

Amendment  81

 

Proposal for a regulation

Recital 46

 

Text proposed by the Commission

Amendment

(46) Having information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.

(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date appropriately throughout the lifecycle of the AI system. AI systems can have an important environmental impact and high energy consumption during their lifecycle. In order to better apprehend the impact of AI systems on the environment, the technical documentation drafted by providers should include information on the energy consumption of the AI system, including the consumption during development and expected consumption during use. Such information should take into account the relevant Union and national legislation. This reported information should be comprehensible, comparable and verifiable and, to that end, the Commission should develop guidelines on a harmonised methodology for calculation and reporting of this information. To ensure that a single documentation is possible, terms and definitions related to the required documentation and any required documentation in the relevant Union legislation should be aligned as much as possible.

Amendment  82

 

Proposal for a regulation

Recital 46 a (new)

 

Text proposed by the Commission

Amendment

 

(46 a) AI systems should take into account state-of-the-art methods and relevant applicable standards to reduce the energy use, resource use and waste, as well as to increase their energy efficiency and the overall efficiency of the system. The environmental aspects of AI systems that are significant for the purposes of this Regulation are the energy consumption of the AI system in the development, training and deployment phase as well as the recording, reporting and storing of this data. The design of AI systems should enable the measurement and logging of the consumption of energy and resources at each stage of development, training and deployment. The monitoring and reporting of the emissions of AI systems must be robust, transparent, consistent and accurate. In order to ensure the uniform application of this Regulation and a stable legal ecosystem for providers and deployers in the Single Market, the Commission should develop a common specification for the methodology to fulfil the reporting and documentation requirement on the consumption of energy and resources during development, training and deployment. Such common specifications on measurement methodology can provide a baseline upon which the Commission can better decide if future regulatory interventions are needed, upon conducting an impact assessment that takes into account existing law.

Amendment  83

 

Proposal for a regulation

Recital 46 b (new)

 

Text proposed by the Commission

Amendment

 

(46 b) In order to achieve the objectives of this Regulation, and contribute to the Union’s environmental objectives while ensuring the smooth functioning of the internal market, it may be necessary to establish recommendations and guidelines and, eventually, targets for sustainability. For that purpose the Commission is entitled to develop a methodology to contribute towards having Key Performance Indicators (KPIs) and a reference for the Sustainable Development Goals (SDGs). The goal should be, in the first instance, to enable fair comparison between AI implementation choices, providing incentives to promote the use of more efficient AI technologies that address energy and resource concerns. To meet this objective, this Regulation should provide the means to establish a baseline collection of data reported on the emissions from development and training and from deployment.

Amendment  84

 

Proposal for a regulation

Recital 47 a (new)

 

Text proposed by the Commission

Amendment

 

(47a) Such requirements on transparency and on the explicability of AI decision-making should also help to counter the deterrent effects of digital asymmetry and so-called ‘dark patterns’ targeting individuals and their informed consent.

Amendment  85

 

Proposal for a regulation

Recital 49

 

Text proposed by the Commission

Amendment

(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.

(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. Performance metrics and their expected level should be defined with the primary objective of mitigating risks and the negative impact of the AI system. The expected level of performance metrics should be communicated in a clear, transparent, easily understandable and intelligible way to the deployers. The declaration of performance metrics cannot be considered proof of future levels, but relevant methods need to be applied to ensure consistent levels during use. While standardisation organisations exist to establish standards, coordination on benchmarking is needed to establish how these standardised requirements and characteristics of AI systems should be measured. The European Artificial Intelligence Office should bring together national and international metrology and benchmarking authorities and provide non-binding guidance to address the technical aspects of how to measure the appropriate levels of performance and robustness.

Amendment  86

 

Proposal for a regulation

Recital 50

 

Text proposed by the Commission

Amendment

(50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.

(50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system. Deployers of the AI system should take steps to ensure that the possible trade-off between robustness and accuracy does not lead to discriminatory or negative outcomes for minority subgroups.

Amendment  87

 

Proposal for a regulation

Recital 51

 

Text proposed by the Commission

Amendment

(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.

(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or confidentiality attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities, also taking into account as appropriate the underlying ICT infrastructure. High-risk AI should be accompanied by security solutions and patches for the lifetime of the product or, in the absence of dependence on a specific product, for a period to be stated by the manufacturer.

Amendment  88

 

Proposal for a regulation

Recital 53 a (new)

 

Text proposed by the Commission

Amendment

 

(53 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system.

Amendment  89

 

Proposal for a regulation

Recital 54

 

Text proposed by the Commission

Amendment

(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.

(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. For providers that already have in place quality management systems based on standards such as ISO 9001 or other relevant standards, no full duplicative quality management system should be expected, but rather an adaptation of their existing systems to certain aspects linked to compliance with specific requirements of this Regulation. This should also be reflected in future standardisation activities or guidance adopted by the Commission in this respect. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.

Amendment  90

 

Proposal for a regulation

Recital 56

 

Text proposed by the Commission

Amendment

(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.

(56) To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.

Amendment  91

 

Proposal for a regulation

Recital 58

 

Text proposed by the Commission

Amendment

(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regard the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.

(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.

Amendment  92

 

Proposal for a regulation

Recital 58 a (new)

 

Text proposed by the Commission

Amendment

 

(58 a) Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use and of the people or groups of people likely to be affected, including marginalised and vulnerable groups. Deployers should identify appropriate governance structures in that specific context of use, such as arrangements for human oversight, complaint-handling procedures and redress procedures, because choices in the governance structures can be instrumental in mitigating risks to fundamental rights in concrete use-cases. In order to efficiently ensure that fundamental rights are protected, the deployer of high-risk AI systems should therefore carry out a fundamental rights impact assessment prior to putting the system into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help to mitigate the risks to fundamental rights identified, at the latest from the time of putting the system into use. If such a plan cannot be identified, the deployer should refrain from putting the system into use. When performing this impact assessment, the deployer should notify the national supervisory authority and, to the best extent possible, relevant stakeholders as well as representatives of groups of persons likely to be affected by the AI system in order to collect relevant information deemed necessary to perform the impact assessment. Deployers are encouraged to make a summary of their fundamental rights impact assessment publicly available on their website. This obligation should not apply to SMEs which, given the lack of resources, might find it difficult to perform such a consultation; nevertheless, they should also strive to involve such representatives when carrying out their fundamental rights impact assessment. In addition, given the potential impact and the need for democratic oversight and scrutiny, deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies, as well as deployers that are undertakings designated as a gatekeeper under Regulation (EU) 2022/1925, should be required to register the use of any high-risk AI system in a public database. Other deployers may register voluntarily.

Amendment  93

 

Proposal for a regulation

Recital 59

 

Text proposed by the Commission

Amendment

(59) It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.

(59) It is appropriate to envisage that the deployer of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.

Amendment  94

 

Proposal for a regulation

Recital 60

 

Text proposed by the Commission

Amendment

(60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and users to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation.

(60) Within the AI value chain, multiple entities often supply tools and services but also components or processes that are then incorporated by the provider into the AI system, including in relation to data collection and pre-processing, model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. The involved entities may make their offering commercially available directly or indirectly, through interfaces, such as Application Programming Interfaces (API), distributed under free and open-source licences, but also increasingly through AI workforce platforms, the resale of trained parameters, DIY kits to build models or the offering of paid access to a model serving architecture to develop and train models. In the light of this complexity of the AI value chain, all relevant third parties, in particular those that are involved in the development, sale and the commercial supply of software tools, components, pre-trained models or data incorporated into the AI system, or providers of network services, should, without compromising their own intellectual property rights or trade secrets, make available the required information, training or expertise and cooperate, as appropriate, with providers to enable their control over all compliance-relevant aspects of the AI system that falls under this Regulation. To allow for cost-effective AI value chain governance, the level of control shall be explicitly disclosed by each third party that supplies the provider with a tool, service, component or process that is later incorporated by the provider into the AI system.

Amendment  95

 

Proposal for a regulation

Recital 60 a (new)

 

Text proposed by the Commission

Amendment

 

(60 a) Where one party is in a stronger bargaining position, there is a risk that that party could leverage such position to the detriment of the other contracting party when negotiating the supply of tools, services, components or processes that are used or integrated in a high-risk AI system or the remedies for the breach or the termination of related obligations. Such contractual imbalances particularly harm micro, small and medium-sized enterprises as well as start-ups, unless they are owned or sub-contracted by an enterprise which is able to compensate the sub-contractor appropriately, as they are without a meaningful ability to negotiate the conditions of the contractual agreement, and may have no other choice than to accept ‘take-it-or-leave-it’ contractual terms. Therefore, unfair contract terms regulating the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or the remedies for the breach or the termination of related obligations, should not be binding on such micro, small or medium-sized enterprises and start-ups when they have been unilaterally imposed on them.

Amendment  96

 

Proposal for a regulation

Recital 60 b (new)

 

Text proposed by the Commission

Amendment

 

(60 b) Rules on contractual terms should take into account the principle of contractual freedom as an essential concept in business-to-business relationships. Therefore, not all contractual terms should be subject to an unfairness test, but only to those terms that are unilaterally imposed on micro, small and medium-sized enterprises and start-ups. This concerns ‘take-it-or-leave-it’ situations where one party supplies a certain contractual term and the micro, small or medium-sized enterprise and start-up cannot influence the content of that term despite an attempt to negotiate it. A contractual term that is simply provided by one party and accepted by the micro, small, medium-sized enterprise or a start-up or a term that is negotiated and subsequently agreed in an amended way between contracting parties should not be considered as unilaterally imposed.

Amendment  97

 

Proposal for a regulation

Recital 60 c (new)

 

Text proposed by the Commission

Amendment

 

(60 c) Furthermore, the rules on unfair contractual terms should only apply to those elements of a contract that are related to supply of tools, services, components or processes that are used or integrated in a high risk AI system or the remedies for the breach or the termination of related obligations. Other parts of the same contract, unrelated to these elements, should not be subject to the unfairness test laid down in this Regulation.

Amendment  98

 

Proposal for a regulation

Recital 60 d (new)

 

Text proposed by the Commission

Amendment

 

(60 d) Criteria to identify unfair contractual terms should be applied only to excessive contractual terms, where a stronger bargaining position is abused. The vast majority of contractual terms that are commercially more favourable to one party than to the other, including those that are normal in business-to-business contracts, are a normal expression of the principle of contractual freedom and continue to apply. If a contractual term is not included in the list of terms that are always considered unfair, the general unfairness provision applies. In this regard, the terms listed as unfair terms should serve as a yardstick to interpret the general unfairness provision.

Amendment  99

 

Proposal for a regulation

Recital 60 e (new)

 

Text proposed by the Commission

Amendment

 

(60 e) Foundation models are a recent development, in which AI models are developed from algorithms designed to optimise for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. The foundation model can be unimodal or multimodal, trained through various methods such as supervised learning or reinforcement learning. AI systems with a specific intended purpose or general purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general purpose AI systems. These models hold growing importance to many downstream applications and systems.

Amendment  100

 

Proposal for a regulation

Recital 60 f (new)

 

Text proposed by the Commission

Amendment

 

(60 f) In the case of foundation models provided as a service such as through API access, the cooperation with downstream providers should extend throughout the time during which that service is provided and supported, in order to enable appropriate risk mitigation, unless the provider of the foundation model transfers the training model as well as extensive and appropriate information on the datasets and the development process of the system or restricts the service, such as the API access, in such a way that the downstream provider is able to fully comply with this Regulation without further support from the original provider of the foundation model.

Amendment  101

 

Proposal for a regulation

Recital 60 g (new)

 

Text proposed by the Commission

Amendment

 

(60 g) In light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of actors contributing to the development of AI systems. There is significant uncertainty as to the way foundation models will evolve, both in terms of typology of models and in terms of self-governance. Therefore, it is essential to clarify the legal situation of providers of foundation models. Given their complexity and unexpected impact, the downstream AI provider’s lack of control over the foundation model’s development and the consequent power imbalance, and in order to ensure a fair sharing of responsibilities along the AI value chain, such models should be subject to proportionate and more specific requirements and obligations under this Regulation: namely, foundation models should assess and mitigate possible risks and harms through appropriate design, testing and analysis, should implement data governance measures, including assessment of biases, should comply with technical design requirements to ensure appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity, and should comply with environmental standards. These obligations should be accompanied by standards. Also, foundation models should have information obligations and prepare all necessary technical documentation for potential downstream providers to be able to comply with their obligations under this Regulation. Generative foundation models should ensure transparency about the fact that the content is generated by an AI system, not by humans. These specific requirements and obligations do not amount to considering foundation models as high-risk AI systems, but should guarantee that the objectives of this Regulation to ensure a high level of protection of fundamental rights, health and safety, environment, democracy and rule of law are achieved. Pre-trained models developed for a narrower, less general, more limited set of applications that cannot be adapted for a wide range of tasks, such as simple multi-purpose AI systems, should not be considered foundation models for the purposes of this Regulation, because of their greater interpretability, which makes their behaviour less unpredictable.

Amendment  102

 

Proposal for a regulation

Recital 60 h (new)

 

Text proposed by the Commission

Amendment

 

(60 h) Given the nature of foundation models, expertise in conformity assessment is lacking and third-party auditing methods are still under development. The sector itself is therefore developing new ways to assess foundation models that fulfil in part the objective of auditing (such as model evaluation, red-teaming or machine learning verification and validation techniques). Those internal assessments for foundation models should be broadly applicable (e.g. independent of distribution channels, modality, development methods), to address risks specific to such models taking into account industry state-of-the-art practices and focus on developing sufficient technical understanding and control over the model, the management of reasonably foreseeable risks, and extensive analysis and testing of the model through appropriate measures, such as by the involvement of independent evaluators. As foundation models are a new and fast-evolving development in the field of artificial intelligence, it is appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models and in particular of generative AI systems based on such models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It should be clarified that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC and (EU) 2019/790 of the European Parliament and of the Council.

Amendment  103

 

Proposal for a regulation

Recital 61

 

Text proposed by the Commission

Amendment

(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient.

(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. To ensure the effectiveness of standards as a policy tool for the Union and considering the importance of standards for ensuring conformity with the requirements of this Regulation and for the competitiveness of undertakings, it is necessary to ensure a balanced representation of interests by involving all relevant stakeholders in the development of standards. The standardisation process should be transparent in terms of legal and natural persons participating in the standardisation activities.

__________________

__________________

54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).

54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).

Amendment  104

 

Proposal for a regulation

Recital 61 a (new)

 

Text proposed by the Commission

Amendment

 

(61 a) In order to facilitate compliance, the first standardisation requests should be issued by the Commission no later than two months after the entry into force of this Regulation. This should serve to improve legal certainty, thereby promoting investment and innovation in AI, as well as the competitiveness and growth of the Union market, while enhancing multistakeholder governance representing all relevant European stakeholders such as the AI Office, European standardisation organisations and bodies or expert groups established under relevant sectoral Union law, as well as industry, SMEs, start-ups, civil society, researchers and social partners, and should ultimately facilitate global cooperation on standardisation in the field of AI in a manner consistent with Union values. When preparing the standardisation request, the Commission should consult the AI Office and its advisory forum in order to collect relevant expertise.

Amendment  105

 

Proposal for a regulation

Recital 61 b (new)

 

Text proposed by the Commission

Amendment

 

(61 b) When AI systems are intended to be used at the workplace, harmonised standards should be limited to technical specifications and procedures.

Amendment  106

 

Proposal for a regulation

Recital 61 c (new)

 

Text proposed by the Commission

Amendment

 

(61 c) The Commission should be able to adopt common specifications under certain conditions, when no relevant harmonised standard exists or to address specific fundamental rights concerns. Throughout the whole drafting process, the Commission should regularly consult the AI Office and its advisory forum, the European standardisation organisations and bodies or expert groups established under relevant sectoral Union law, as well as relevant stakeholders such as industry, SMEs, start-ups, civil society, researchers and social partners.

Amendment  107

 

Proposal for a regulation

Recital 61 d (new)

 

Text proposed by the Commission

Amendment

 

(61 d) When adopting common specifications, the Commission should strive for regulatory alignment of AI with like-minded global partners, which is key to fostering innovation and cross-border partnerships within the field of AI, as coordination with like-minded partners in international standardisation bodies is of great importance.

Amendment  108

 

Proposal for a regulation

Recital 62

 

Text proposed by the Commission

Amendment

(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.

(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. To increase the trust in the value chain and to give certainty to businesses about the performance of their systems, third parties that supply AI components may voluntarily apply for a third-party conformity assessment.

Amendment  109

 

Proposal for a regulation

Recital 64

 

Text proposed by the Commission

Amendment

(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.

(64) Given the complexity of high-risk AI systems and the risks that are associated with them, it is essential to develop a more adequate capacity for the application of third-party conformity assessment for high-risk AI systems. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, or AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.

Amendment  110

 

Proposal for a regulation

Recital 65

 

Text proposed by the Commission

Amendment

(65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.

(65) In order to carry out third-party conformity assessments when so required, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence, absence of conflicts of interest and minimum cybersecurity requirements. Member States should encourage the designation of a sufficient number of conformity assessment bodies, in order to make the certification feasible in a timely manner. The procedures of assessment, designation, notification and monitoring of conformity assessment bodies should be implemented as uniformly as possible in Member States, with a view to removing cross-border administrative barriers and ensuring that the potential of the internal market is realised.

Amendment  111

 

Proposal for a regulation

Recital 65 a (new)

 

Text proposed by the Commission

Amendment

 

(65 a) In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is appropriate to maximise the acceptance of test results produced by competent conformity assessment bodies, independent of the territory in which they are established, where necessary to demonstrate conformity with the applicable requirements of this Regulation. The Commission should actively explore possible international instruments for that purpose and in particular pursue the possible establishment of mutual recognition agreements with countries which are on a comparable level of technical development and have a compatible approach concerning AI and conformity assessment.

Amendment  112

 

Proposal for a regulation

Recital 66

 

Text proposed by the Commission

Amendment

(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.

(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that a high-risk AI system undergoes a new conformity assessment whenever an unplanned change occurs which goes beyond controlled or predetermined changes by the provider, including continuous learning, and which may create a new unacceptable risk and significantly affect the compliance of the high-risk AI system with this Regulation, or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system, provided that they do not amount to a substantial modification.

Amendment  113

 

Proposal for a regulation

Recital 67

 

Text proposed by the Commission

Amendment

(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.

(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. For physical high-risk AI systems, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For digital-only high-risk AI systems, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.

Amendment  114

 

Proposal for a regulation

Recital 68

 

Text proposed by the Commission

Amendment

(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.

(68) Under certain conditions, rapid availability of innovative technologies may be crucial for the health and safety of persons, for the environment and climate, and for society as a whole. It is thus appropriate that under exceptional reasons of protection of the life and health of natural persons, environmental protection and the protection of critical infrastructure, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.

Amendment  115

 

Proposal for a regulation

Recital 69

 

Text proposed by the Commission

Amendment

(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in a EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and an independent audit report.

(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation should be required to register their high-risk AI systems and foundation models in an EU database, to be established and managed by the Commission. This database should be freely and publicly accessible, easily understandable and machine-readable. The database should also be user-friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk under Annex IV and keywords. Deployers who are public authorities or Union institutions, bodies, offices and agencies or deployers acting on their behalf, and deployers who are undertakings designated as a gatekeeper under Regulation (EU) 2022/1925, should also register in the EU database before putting into service or using a high-risk AI system for the first time and following each substantial modification. Other deployers should be entitled to do so voluntarily. Any substantial modification of high-risk AI systems should also be registered in the EU database. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity and hazard-related risks when carrying out its tasks as data controller on the EU database. In order to maximise the availability and use of the database by the public, the database, including the information made available through it, should comply with requirements under Directive (EU) 2019/882.

__________________

__________________

55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).

55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).

Amendment  116

 

Proposal for a regulation

Recital 71

 

Text proposed by the Commission

Amendment

(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.

(71) Artificial intelligence is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation and is future-proof and resilient to disruption, Member States should establish at least one artificial intelligence regulatory sandbox to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. It is indeed desirable that the establishment of regulatory sandboxes, which is currently left to the discretion of Member States, be made mandatory as a next step, with established criteria. That mandatory sandbox could also be established jointly with one or several other Member States, as long as that sandbox would cover the respective national level of the involved Member States. Additional sandboxes may also be established at different levels, including across Member States, in order to facilitate cross-border cooperation and synergies. With the exception of the mandatory sandbox at national level, Member States should also be able to establish virtual or hybrid sandboxes. All regulatory sandboxes should be able to accommodate both physical and virtual products. Establishing authorities should also ensure that the regulatory sandboxes have the adequate financial and human resources for their functioning.

Amendment  117

 

Proposal for a regulation

Recital 72

 

Text proposed by the Commission

Amendment

(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.

(72) The objectives of the regulatory sandboxes should be: for the establishing authorities, to increase their understanding of technical developments, improve supervisory methods and provide guidance to AI systems developers and providers to achieve regulatory compliance with this Regulation or, where relevant, other applicable Union and Member States legislation, as well as with the Charter of Fundamental Rights; for the prospective providers, to allow and facilitate the testing and development of innovative solutions related to AI systems in the pre-marketing phase to enhance legal certainty, to allow for more regulatory learning by establishing authorities in a controlled environment to develop better guidance and to identify possible future improvements of the legal framework through the ordinary legislative procedure. Any significant risks identified during the development and testing of such AI systems should result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. Member States should ensure that regulatory sandboxes are widely available throughout the Union, while participation should remain voluntary. It is especially important to ensure that SMEs and start-ups can easily access these sandboxes, are actively involved and participate in the development and testing of innovative AI systems, in order to be able to contribute with their know-how and experience.

Amendment  118

 

Proposal for a regulation

Recital 72 a (new)

 

Text proposed by the Commission

Amendment

 

(72 a) This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox only under specified conditions, in line with Article 6(4) of Regulation (EU) 2016/679 and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Prospective providers in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety, health, the environment and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the prospective providers in the sandbox should be taken into account when competent authorities decide on the temporary or permanent suspension of their participation in the sandbox or on whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.

Amendment  119

 

Proposal for a regulation

Recital 72 b (new)

 

Text proposed by the Commission

Amendment

 

(72 b) To ensure that artificial intelligence leads to socially and environmentally beneficial outcomes, Member States should support and promote research and development of AI in support of socially and environmentally beneficial outcomes by allocating sufficient resources, including public and Union funding, and giving priority access to regulatory sandboxes to projects led by civil society. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental and digital rights, as well as academics.

Amendment  120

 

Proposal for a regulation

Recital 73

 

Text proposed by the Commission

Amendment

(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.

(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives which are targeted at those operators, including on AI literacy, awareness raising and information communication. Member States should utilise existing channels and, where appropriate, establish new dedicated channels for communication with SMEs, start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation. Such existing channels could include but are not limited to ENISA’s Computer Security Incident Response Teams, National Data Protection Agencies, the AI-on-demand platform, the European Digital Innovation Hubs and other relevant instruments funded by EU programmes, as well as the Testing and Experimentation Facilities established by the Commission and the Member States at national or Union level. Where appropriate, these channels should work together to create synergies and ensure homogeneity in their guidance to start-ups, SMEs and users. Moreover, the specific interests and needs of small-scale providers should be taken into account when notified bodies set conformity assessment fees. The Commission should regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users, and should work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. Medium-sized enterprises which recently changed from the small to medium-size category within the meaning of the Annex to Recommendation 2003/361/EC (Article 16) should have access to these initiatives and guidance for a period of time deemed appropriate by the Member States, as these new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding of, and compliance with, the provisions of this Regulation.

 

Amendment  121

 

Proposal for a regulation

Recital 74

 

Text proposed by the Commission

Amendment

(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.

(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.

Amendment  122

 

Proposal for a regulation

Recital 76

 

Text proposed by the Commission

Amendment

(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence.

(76) In order to avoid fragmentation, to ensure the optimal functioning of the single market, to ensure effective and harmonised implementation of this Regulation, to achieve a high level of trustworthiness and of protection of health and safety, fundamental rights, the environment, democracy and the rule of law across the Union with regard to AI systems, to actively support national supervisory authorities, Union institutions, bodies, offices and agencies in matters pertaining to this Regulation, and to increase the uptake of artificial intelligence throughout the Union, a European Union Artificial Intelligence Office (AI Office) should be established. The AI Office should have legal personality, should act in full independence, should be responsible for a number of advisory and coordination tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, and should be adequately funded and staffed. Member States should provide the strategic direction and control of the AI Office through the management board of the AI Office, alongside the Commission, the EDPS, the FRA and ENISA. An executive director should be responsible for managing the activities of the secretariat of the AI Office and for representing the AI Office. Stakeholders should formally participate in the work of the AI Office through an advisory forum that should ensure varied and balanced stakeholder representation and should advise the AI Office on matters pertaining to this Regulation. Should the establishment of the AI Office prove not to be sufficient to ensure a fully consistent application of this Regulation at Union level as well as efficient cross-border enforcement measures, the creation of an AI agency should be considered.

Amendment  123

 

Proposal for a regulation

Recital 77

 

Text proposed by the Commission

Amendment

(77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State one national authority should be designated as national supervisory authority.

(77) Each Member State should designate a national supervisory authority for the purpose of supervising the application and implementation of this Regulation, in order to increase organisational efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels. That authority should also represent its Member State at the management board of the AI Office. Each national supervisory authority should act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation.

Amendment  124

 

Proposal for a regulation

Recital 77 a (new)

 

Text proposed by the Commission

Amendment

 

(77 a) The national supervisory authorities should monitor the application of the provisions pursuant to this Regulation and contribute to its consistent application throughout the Union. For that purpose, the national supervisory authorities should cooperate with each other, with the relevant national competent authorities, the Commission, and with the AI Office.

Amendment  125

 

Proposal for a regulation

Recital 77 b (new)

 

Text proposed by the Commission

Amendment

 

(77 b) The members and the staff of each national supervisory authority should, in accordance with Union or national law, be subject to a duty of professional secrecy, both during and after their term of office, with regard to any confidential information which has come to their knowledge in the course of the performance of their tasks or the exercise of their powers. During their term of office, that duty of professional secrecy should in particular apply to trade secrets and to reporting by natural persons of infringements of this Regulation.

Amendment  126

 

Proposal for a regulation

Recital 78

 

Text proposed by the Commission

Amendment

(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems.

(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ or evolve after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches of national and Union law, including those protecting fundamental rights and consumer rights, resulting from the use of their AI systems, and to take appropriate corrective action. Deployers should also report to the relevant authorities any serious incidents or breaches of national and Union law resulting from the use of their AI system when they become aware of such serious incidents or breaches.

Amendment  127

 

Proposal for a regulation

Recital 79

 

Text proposed by the Commission

Amendment

(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation.

(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. For the purpose of this Regulation, national supervisory authorities should act as market surveillance authorities for AI systems covered by this Regulation, except for AI systems covered by Annex II of this Regulation. For AI systems covered by the legal acts listed in Annex II, the competent authorities under those legal acts should remain the lead authority. National supervisory authorities and competent authorities under the legal acts listed in Annex II should work together whenever necessary. When appropriate, the competent authorities under the legal acts listed in Annex II should send competent staff to the national supervisory authority in order to assist in the performance of its tasks. For the purpose of this Regulation, national supervisory authorities should have the same powers and obligations as market surveillance authorities under Regulation (EU) 2019/1020. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. After having exhausted all other reasonable ways to assess or verify conformity, and upon a reasoned request, the national supervisory authority should be granted access to the training, validation and testing datasets and the trained and training model of the high-risk AI system, including its relevant model parameters and their execution or run environment. In cases of simpler software systems falling under this Regulation that are not based on trained models, and where all other ways to verify conformity have been exhausted, the national supervisory authority may exceptionally have access to the source code, upon a reasoned request. Where the national supervisory authority has been granted access to the training, validation and testing datasets in accordance with this Regulation, such access should be achieved through appropriate technical means and tools, including on-site access and, in exceptional circumstances, remote access. The national supervisory authority should treat any information obtained, including source code, software and data as applicable, as confidential information and respect relevant Union law on the protection of intellectual property and trade secrets. The national supervisory authority should delete any information obtained upon the completion of the investigation.

Amendment  128

 

Proposal for a regulation

Recital 80

 

Text proposed by the Commission

Amendment

(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.

(80) Union law on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services law, the competent authorities responsible for the supervision and enforcement of the financial services law, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.

__________________

__________________

56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).

56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).

Amendment  129

 

Proposal for a regulation

Recital 80 a (new)

 

Text proposed by the Commission

Amendment

 

(80 a) Given the objectives of this Regulation, namely to ensure an equivalent level of protection of the health, safety and fundamental rights of natural persons and to ensure the protection of the rule of law and democracy, and taking into account that the mitigation of the risks of AI systems against such rights may not be sufficiently achieved at national level or may be subject to diverging interpretation which could ultimately lead to an uneven level of protection of natural persons and create market fragmentation, the national supervisory authorities should be empowered to conduct joint investigations or rely on the Union safeguard procedure provided for in this Regulation for effective enforcement. Joint investigations should be initiated where the national supervisory authority has sufficient reasons to believe that an infringement of this Regulation amounts to a widespread infringement or a widespread infringement with a Union dimension, or where the AI system or foundation model presents a risk which affects or is likely to affect at least 45 million individuals in more than one Member State.

Amendment  130

 

Proposal for a regulation

Recital 82

 

Text proposed by the Commission

Amendment

(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council57 would apply as a safety net.

(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council57 would apply as a safety net.

__________________

__________________

57 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4).

57 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4).

Amendment  131

 

Proposal for a regulation

Recital 83

 

Text proposed by the Commission

Amendment

(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks.

(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should aim for transparency and openness while respecting the confidentiality of information and data obtained in carrying out their tasks, by putting in place technical and organisational measures to protect the security and confidentiality of the information obtained in carrying out their activities, including for intellectual property rights and public and national security interests. Where the activities of the Commission, national competent authorities and notified bodies pursuant to this Regulation result in a breach of intellectual property rights, Member States should provide for adequate measures and remedies to ensure the enforcement of intellectual property rights in application of Directive 2004/48/EC.

Amendment  132

 

Proposal for a regulation

Recital 84

 

Text proposed by the Commission

Amendment

(84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.

(84) Compliance with this Regulation should be enforceable by means of the imposition of fines by the national supervisory authority when carrying out proceedings under the procedure laid down in this Regulation. Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, national competent authorities should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the provider’s size, in particular if the provider is an SME or a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. The penalties and litigation costs under this Regulation should not be subject to contractual clauses or any other arrangements.

Amendment  133

 

Proposal for a regulation

Recital 84 a (new)

 

Text proposed by the Commission

Amendment

 

(84 a) As the rights and freedoms of natural and legal persons and groups of natural persons can be seriously undermined by AI systems, it is essential that natural and legal persons or groups of natural persons have meaningful access to reporting and redress mechanisms and are entitled to access proportionate and effective remedies. They should be able to report infringements of this Regulation to their national supervisory authority and have the right to lodge a complaint against the providers or deployers of AI systems. Where applicable, deployers should provide internal complaints mechanisms to be used by natural and legal persons or groups of natural persons. Without prejudice to any other administrative or non-judicial remedy, natural and legal persons and groups of natural persons should also have the right to an effective judicial remedy with regard to a legally binding decision of a national supervisory authority concerning them or, where the national supervisory authority does not handle a complaint, does not inform the complainant of the progress or preliminary outcome of the complaint lodged or does not comply with its obligation to reach a final decision, with regard to the complaint.

Amendment  134

 

Proposal for a regulation

Recital 84 b (new)

 

Text proposed by the Commission

Amendment

 

(84 b) Affected persons should always be informed that they are subject to the use of a high-risk AI system when deployers use a high-risk AI system to assist in decision-making or make decisions related to natural persons. This information can provide a basis for affected persons to exercise their right to an explanation under this Regulation. When deployers provide an explanation to affected persons under this Regulation, they should take into account the level of expertise and knowledge of the average consumer or individual.

Amendment  135

 

Proposal for a regulation

Recital 84 c (new)

 

Text proposed by the Commission

Amendment

 

(84 c) Union law on the protection of whistleblowers (Directive (EU) 2019/1937) has full application to academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators acquiring information on breaches of Union law by a provider of an AI system or its AI system.

Amendment  136

 

Proposal for a regulation

Recital 85

 

Text proposed by the Commission

Amendment

(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58 . In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58. These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, civil society, associations representing affected persons, business representatives from different sectors and of different sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

__________________

__________________

58 OJ L 123, 12.5.2016, p. 1.

58 OJ L 123, 12.5.2016, p. 1.

Amendment  137

 

Proposal for a regulation

Recital 85 a (new)

 

Text proposed by the Commission

Amendment

 

(85 a) Given the rapid technological developments and the required technical expertise in conducting the assessment of high-risk AI systems, the Commission should regularly review the implementation of this Regulation, in particular the prohibited AI systems, the transparency obligations and the list of high-risk areas and use cases, at least every year, while consulting the AI Office and the relevant stakeholders.

Amendment  138

 

Proposal for a regulation

Recital 87 a (new)

 

Text proposed by the Commission

Amendment

 

(87 a) As reliable information on the resource and energy use, waste production and other environmental impact of AI systems and related ICT technology, including software, hardware and in particular data centres, is limited, the Commission should introduce an adequate methodology to measure the environmental impact and the effectiveness of this Regulation in light of the Union environmental and climate objectives.

Amendment  139

 

Proposal for a regulation

Recital 89

 

Text proposed by the Commission

Amendment

(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on […]”.

(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on 18 June 2021.

Amendment  140

 

Proposal for a regulation

Article 1 – paragraph 1 (new)

 

Text proposed by the Commission

Amendment

 

1. The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects of artificial intelligence systems in the Union, while supporting innovation.

Amendment  141

 

Proposal for a regulation

Article 1 – paragraph 1 – point d

 

Text proposed by the Commission

Amendment

(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;

(d) harmonised transparency rules for certain AI systems;

Amendment  142

Proposal for a regulation

Article 1 – paragraph 1 – point e

 

Text proposed by the Commission

Amendment

(e) rules on market monitoring and surveillance.

(e) rules on market monitoring, market surveillance, governance and enforcement;

Amendment  143

 

Proposal for a regulation

Article 1 – paragraph 1 – point e a (new)

 

Text proposed by the Commission

Amendment

 

(e a) measures to support innovation, with a particular focus on SMEs and start-ups, including on setting up regulatory sandboxes and targeted measures to reduce the regulatory burden on SMEs and start-ups;

Amendment  144

 

Proposal for a regulation

Article 1 – paragraph 1 – point e b (new)

 

Text proposed by the Commission

Amendment

 

(e b) rules for the establishment and functioning of the Union’s Artificial Intelligence Office (AI Office).

Amendment  145

 

Proposal for a regulation

Article 2 – paragraph 1 – point b

 

Text proposed by the Commission

Amendment

(b) users of AI systems located within the Union;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

Amendment  146

 

Proposal for a regulation

Article 2 – paragraph 1 – point c

 

Text proposed by the Commission

Amendment

(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union;

Amendment  147

 

Proposal for a regulation

Article 2 – paragraph 1 – point c a (new)

 

Text proposed by the Commission

Amendment

 

(c a) providers placing on the market or putting into service AI systems referred to in Article 5 outside the Union where the provider or distributor of such systems is located within the Union;

Amendment  148

 

Proposal for a regulation

Article 2 – paragraph 1 – point c b (new)

 

Text proposed by the Commission

Amendment

 

(c b) importers and distributors of AI systems as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union;

Amendment  149

 

Proposal for a regulation

Article 2 – paragraph 1 – point c c (new)

 

Text proposed by the Commission

Amendment

 

(c c) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union.

Amendment  150

 

Proposal for a regulation

Article 2 – paragraph 2 – introductory part

 

Text proposed by the Commission

Amendment

2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts, only Article 84 of this Regulation shall apply:

2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, and that fall within the scope of harmonisation legislation listed in Annex II - Section B, only Article 84 of this Regulation shall apply;

Amendment  151

 

Proposal for a regulation

Article 2 – paragraph 2 – point a

 

Text proposed by the Commission

Amendment

(a) Regulation (EC) 300/2008;

deleted

Amendment  152

 

Proposal for a regulation

Article 2 – paragraph 2 – point b

 

Text proposed by the Commission

Amendment

(b) Regulation (EU) No 167/2013;

deleted

Amendment  153

 

Proposal for a regulation

Article 2 – paragraph 2 – point c

 

Text proposed by the Commission

Amendment

(c) Regulation (EU) No 168/2013;

deleted

Amendment  154

 

Proposal for a regulation

Article 2 – paragraph 2 – point d

 

Text proposed by the Commission

Amendment

(d) Directive 2014/90/EU;

deleted

Amendment  155

 

Proposal for a regulation

Article 2 – paragraph 2 – point e

 

Text proposed by the Commission

Amendment

(e) Directive (EU) 2016/797;

deleted

Amendment  156

 

Proposal for a regulation

Article 2 – paragraph 2 – point f

 

Text proposed by the Commission

Amendment

(f) Regulation (EU) 2018/858;

deleted

Amendment  157

 

Proposal for a regulation

Article 2 – paragraph 2 – point g

 

Text proposed by the Commission

Amendment

(g) Regulation (EU) 2018/1139;

deleted

Amendment  158

 

Proposal for a regulation

Article 2 – paragraph 2 – point h

 

Text proposed by the Commission

Amendment

(h) Regulation (EU) 2019/2144.

deleted

Amendment  159

 

Proposal for a regulation

Article 2 – paragraph 4

 

Text proposed by the Commission

Amendment

4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.

4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States and are the subject of a decision of the Commission adopted in accordance with Article 36 of Directive (EU) 2016/680 or Article 45 of Regulation (EU) 2016/679 (adequacy decision) or are part of an international agreement concluded between the Union and that third country or international organisation pursuant to Article 218 TFEU providing adequate safeguards with respect to the protection of privacy and fundamental rights and freedoms of individuals;

Amendment  160

 

Proposal for a regulation

Article 2 – paragraph 5 a (new)

 

Text proposed by the Commission

Amendment

 

5 a. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680, without prejudice to the arrangements provided for in Article 10(5) and Article 54 of this Regulation;

Amendment  161

 

Proposal for a regulation

Article 2 – paragraph 5 b (new)

 

Text proposed by the Commission

Amendment

 

5 b. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety;

Amendment  162

 

Proposal for a regulation

Article 2 – paragraph 5 c (new)

 

Text proposed by the Commission

Amendment

 

5 c. This Regulation shall not preclude Member States or the Union from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.

Amendment  163

 

Proposal for a regulation

Article 2 – paragraph 5 d (new)

 

Text proposed by the Commission

Amendment

 

5 d. This Regulation shall not apply to research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and the applicable Union law. The testing in real world conditions shall not be covered by this exemption. The Commission is empowered to adopt delegated acts in accordance with Article 73 that clarify the application of this paragraph to specify this exemption in order to prevent its existing and potential abuse. The AI Office shall provide guidance on the governance of research and development pursuant to Article 56, also aiming to coordinate its application by the national supervisory authorities;

Amendment  164

 

Proposal for a regulation

Article 2 – paragraph 5 e (new)

 

Text proposed by the Commission

Amendment

 

5 e. This Regulation shall not apply to AI components provided under free and open-source licences except to the extent they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. This exemption shall not apply to foundation models as defined in Article 3.

Amendment  165