Amendments adopted by the European Parliament on 26 March 2026 on the proposal for a regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI) (COM(2025)0836 – C10-0304/2025 – 2025/0359(COD))(1)
(Ordinary legislative procedure: first reading)
Text proposed by the Commission
Amendment
Amendment 1 Proposal for a regulation Recital 3
(3) Consequently, targeted amendments to Regulation (EU) 2024/1689 are necessary to address certain implementation challenges, with a view to the effective application of the relevant rules.
(3) Consequently, targeted amendments to Regulation (EU) 2024/1689 are necessary to address certain implementation challenges, with a view to the effective, simple and uniform application of the relevant rules.
Amendment 2 Proposal for a regulation Recital 3 a (new)
(3a) Additionally, the Commission, the AI Office and Member States’ competent authorities should ensure that supervision, enforcement and monitoring of sectorial and national laws do not create overlaps, inconsistent interpretations or divergent enforcement in order to enable AI innovation in the private and public sector.
Amendment 3 Proposal for a regulation Recital 4
(4) Enterprises outgrowing the micro, small and medium-sized enterprises (‘SME’) definition – the ‘small mid-cap enterprises’ (‘SMCs’) – play a vital role in the Union’s economy. Compared to SMEs, SMCs tend to demonstrate a higher pace of growth and level of innovation and digitisation. Nevertheless, they face challenges similar to SMEs in relation to administrative burden, leading to a need for proportionality in the implementation of Regulation (EU) 2024/1689 and for targeted support. To enable the smooth transition of enterprises from SMEs into SMCs, it is important to address in a coherent manner the effect that regulation may have on their activity once those enterprises outgrow the segment of SMEs and are faced with rules that apply to large enterprises. Regulation (EU) 2024/1689 provides for several measures for small-scale providers, which should be extended to SMCs. In order to clarify the treatment of SMEs and SMCs in Regulation (EU) 2024/1689, it is necessary to introduce definitions for SMEs and SMCs, which should correspond to the definitions set out in the Annex to Commission Recommendation 2003/361/EC4 and the Annex to Commission Recommendation (EU) 2025/1099.
(4) 99,8% of all Union companies are small and medium-sized enterprises, the majority of which are micro and small enterprises.3a Enterprises outgrowing the micro, small and medium-sized enterprises (‘SME’) definition – the ‘small mid-cap enterprises’ (‘SMCs’) – play a vital role in the Union’s economy. Compared to SMEs, SMCs tend to demonstrate a higher pace of growth and level of innovation and digitisation. Nevertheless, they face challenges similar to SMEs in relation to administrative burden, leading to a need for proportionality in the implementation of Regulation (EU) 2024/1689 and for targeted support. To enable the smooth transition of enterprises from SMEs into SMCs, it is important to address in a coherent manner the effect that regulation may have on their activity once those enterprises outgrow the segment of SMEs and are faced with rules that apply to large enterprises. Regulation (EU) 2024/1689 provides for several measures for small-scale providers, which should be extended to SMCs where appropriate, while safeguarding the overarching objectives and level of protection afforded under Regulation (EU) 2024/1689.3b In order to clarify the treatment of SMEs and SMCs in Regulation (EU) 2024/1689, it is necessary to introduce definitions for SMEs and SMCs, which should correspond to the definitions set out in the Annex to Commission Recommendation 2003/361/EC4 and the Annex to Commission Recommendation (EU) 2025/1099.
4 Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (OJ L 124, 20.5.2003, pp. 36–41, ELI: http://data.europa.eu/eli/reco/2003/361/oj).
5 Commission Recommendation (EU) 2025/1099 of 21 May 2025 on the definition of small mid-cap enterprises (OJ L, 2025/1099, 28.5.2025, ELI: http://data.europa.eu/eli/reco/2025/1099/oj).
Amendment 4 Proposal for a regulation Recital 5
(5) Article 4 of Regulation (EU) 2024/1689 currently imposes an obligation on all providers and deployers of AI systems to ensure AI literacy of their staff. AI literacy development starting from education and training and continuing in a lifelong learning manner is crucial to equip providers, deployers and other affected persons with the necessary notions to make informed decisions regarding AI systems deployment. However, experience shared by stakeholders reveals that a one-size-fits-all solution is not suitable for all types of providers and deployers in relation to the promotion of AI literacy, rendering such a horizontal obligation ineffective in achieving the objective pursued by this provision. Moreover, data indicate that imposing such an obligation creates an additional compliance burden, particularly for smaller enterprises, whereas AI literacy should be a strategic priority, regardless of regulatory obligations and potential sanctions. In light of that, Article 4 of Regulation (EU) 2024/1689 should be amended to require the Member States and the Commission, without prejudice to their respective competences, to individually, collectively and in cooperation with relevant stakeholders encourage providers and deployers to provide a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, including through offering training opportunities, providing informational resources, and allowing exchange of good practices and other non-legally binding initiatives. The European Artificial Intelligence Board (‘Board’) will ensure recurrent exchange between the Commission and Member States on the topic, while the Apply AI Alliance will allow discussion with the wider community. 
This amendment is without prejudice to the broader measures taken by the Commission and the Member States to promote AI literacy and competences for the wider population, including learners, students, and citizens at different ages and in particular through education and training systems.
(5) Article 4 of Regulation (EU) 2024/1689 currently imposes an obligation on all providers and deployers of AI systems to ensure AI literacy of their staff. AI literacy development starting from education and training and continuing in a lifelong learning manner is crucial to equip providers, deployers and other affected persons with the necessary skills to make informed decisions regarding AI systems deployment. However, experience shared by stakeholders reveals that a solution imposing stringent obligations to ensure a sufficient level of AI literacy is not suitable for all types of providers and deployers in relation to the promotion of AI literacy. In light of that, Article 4 of Regulation (EU) 2024/1689 should be amended to require providers and deployers of AI systems to support AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. The European Commission should promote AI literacy and competences for the wider population, and in order to support, facilitate and complement the efforts of providers, should be tasked to issue guidance on the practical implementation regarding the obligation on providers and deployers of AI systems, and should, together with the Member States, encourage and support AI literacy in society. This should include facilitating and complementing the efforts of providers and deployers of AI systems, in particular SMEs, as the implementation of the relevant obligations poses particular challenges for them. One possibility to facilitate AI literacy in the Union could be the creation of Public Private Partnerships (PPPs).
Amendment 5 Proposal for a regulation Recital 5 a (new)
(5a) AI systems that alter, manipulate or artificially generate realistic images or videos depicting sexually explicit activities, or the intimate parts of an identifiable natural person, without that person’s consent, cause harm to victims and violate fundamental rights to dignity and privacy. The proliferation of such technologies, often marketed as ‘nudification’ applications, has created an urgent need for explicit regulatory prohibition. Regulation (EU) 2024/1689 establishes a framework for prohibited AI practices, which is to be kept under review. This is without prejudice to the rights, freedoms and principles recognised by Article 6 TEU and the Charter of Fundamental Rights of the European Union, and the exercise of the rights guaranteed therein to freedom of expression and information and the freedom of the arts and sciences. This prohibition should not apply to providers or deployers of AI systems who have put in place effective safety measures, such as technical and organisational measures, to prevent the generation of such depictions and to avoid continuous misuse after the system has been placed on the market or put into service, despite the intention of the provider or deployer. Moreover, this prohibition should not prevent AI providers from developing their technical capabilities to alter, manipulate or artificially generate images or videos.
Amendment 6 Proposal for a regulation Recital 6
(6) Bias detection and correction constitute a substantial public interest because they protect natural persons from biases’ adverse effects, including discrimination. Discrimination might result from the bias in AI models and AI systems other than high-risk AI systems, for which Regulation (EU) 2024/1689 already provides a legal basis authorising the processing of special categories of personal data under Article 9(2), point (g), of Regulation (EU) 2016/679 of the European Parliament and of the Council6. Given that discrimination might result also from those other AI systems and models, it is therefore appropriate that Regulation (EU) 2024/1689 should provide for a legal basis for the processing of special categories of personal data also by providers and deployers of other AI systems and AI models as well as deployers of high-risk AI systems. The legal basis is established in compliance with Article 9(2), point (g), of Regulation (EU) 2016/679, Article 10(2), point (g), of Regulation (EU) 2018/1725 of the European Parliament and of the Council7 and Article 10, point (a), of Directive (EU) 2016/680 of the European Parliament and of the Council8, and allows, where necessary for the detection and removal of bias, the processing of special categories of personal data by providers and deployers of all AI systems and models, subject to appropriate safeguards that complement Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable.
(6) Bias detection and correction constitute a substantial public interest because they protect natural persons from biases’ adverse effects, including discrimination. For that reason, Regulation (EU) 2024/1689 already provides a legal basis authorising the providers of high-risk AI systems to process special categories of personal data in certain exceptional cases and subject to strict safeguards. This legal basis is linked to those providers’ obligation to establish practices concerning the detection, prevention and mitigation of biases likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law. Accordingly, a substantial public interest exists to permit, where strictly necessary, the processing of special categories of personal data for the purposes of bias detection and correction. It is therefore necessary to extend the legal basis established under Regulation (EU) 2024/1689 so that it also applies to providers and deployers of other AI systems and AI models. That legal basis should be subject to the same conditions and safeguards as apply under the existing Article 10(5), thereby ensuring compliance with Article 9(2), point (g), of Regulation (EU) 2016/679, Article 10(2), point (g), of Regulation (EU) 2018/1725 of the European Parliament and of the Council and Article 10, point (a), of Directive (EU) 2016/680 of the European Parliament and of the Council.
__________________
6 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1, ELI: http://data.europa.eu/eli/reg/2016/679/oj).
7 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39, ELI: http://data.europa.eu/eli/reg/2018/1725/oj).
8 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (OJ L 119, 4.5.2016, pp. 89–131, ELI: http://data.europa.eu/eli/dir/2016/680/oj).
Amendment 7 Proposal for a regulation Recital 7
(7) In order to ensure consistency, avoid duplication and minimise administrative burdens in relation to the procedure for designating notified bodies under Regulation (EU) 2024/1689, while maintaining the same level of scrutiny, a single application and a single assessment procedure should be available for new conformity assessment bodies and notified bodies which are designated under the Union harmonisation legislation listed in Section A of Annex I to Regulation (EU) 2024/1689, such as under Regulations (EU) 2017/745 and (EU) 2017/746 of the European Parliament and of the Council, where such a procedure is established under that Union harmonisation legislation. The single application and assessment procedure aims at facilitating, supporting and expediting the designation procedure under Regulation (EU) 2024/1689, while ensuring compliance with the requirements applicable to notified bodies under that Regulation and the Union harmonisation legislation listed in Section A of Annex I thereto.
deleted
__________________
9 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1, ELI: http://data.europa.eu/eli/reg/2017/745/oj).
10 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176, ELI: http://data.europa.eu/eli/reg/2017/746/oj).
Amendment 8 Proposal for a regulation Recital 8
(8) With a view to ensuring the smooth application and consistency of Regulation (EU) 2024/1689, amendments should be made to it. A technical correction to Article 43(3), first subparagraph, of Regulation (EU) 2024/1689 should be added to align the conformity assessment requirements with the requirements of providers of high-risk AI systems in Article 16 of that Regulation. Moreover, it should be clarified that where a provider of a high-risk AI system is subject to the conformity assessment procedure under Union harmonisation legislation listed in Section A of Annex I to Regulation (EU) 2024/1689, and the conformity assessment extends to compliance of the quality management system of that Regulation and of such Union harmonisation legislation, the provider should be able to include aspects related to quality management systems under that Regulation as part of the quality management systems under such Union harmonisation legislation, in line with Article 17(3) of Regulation (EU) 2024/1689. Article 43(3), second subparagraph, should be amended to clarify that notified bodies which have been notified under the Union harmonisation legislation listed in Section A of Annex I to Regulation (EU) 2024/1689 and which aim to assess high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I to that Regulation, should apply for the designation as a notified body under that Regulation within 18 months from [the entry into application of this Regulation]. This amendment is without prejudice to Article 28 of Regulation (EU) 2024/1689. 
Moreover, Regulation (EU) 2024/1689 should be amended to clarify that where a high-risk AI system is both covered by the Union harmonisation legislation listed in Section A of Annex I to Regulation (EU) 2024/1689 and falls within one of the use-cases listed in Annex III to that Regulation, the provider should follow the relevant conformity assessment procedure as required under that relevant harmonisation legislation.
(8) deleted
Amendment 9 Proposal for a regulation Recital 8 a (new)
(8a) Regulation (EU) 2024/1689 and Regulation (EU) 2024/2847 complement each other so that the safety and cybersecurity of products with digital elements is ensured. It is necessary to ensure the alignment of Regulation (EU) 2024/1689 and Regulation (EU) 2024/2847, to allow for their smooth implementation. Where high-risk AI systems fulfil the essential cybersecurity requirements set out in Regulation (EU) 2024/2847, they should be deemed to comply with the cybersecurity requirements set out in Article 15 of Regulation (EU) 2024/1689 in so far as those requirements are covered by the EU declaration of conformity or parts thereof issued pursuant to Regulation (EU) 2024/2847.
Amendment 10 Proposal for a regulation Recital 8 b (new)
(8b) For the purposes of this Regulation, the fact that an AI system is integrated into, or operates within, a product subject to Union harmonisation legislation on product safety should not, in itself, imply that the AI system performs a safety function. An AI system should be regarded as performing a safety function only where its functioning is necessary to ensure that the product or the AI system complies with applicable Union safety requirements. By contrast, functionalities intended solely for user assistance, performance optimisation, service efficiency, automation, convenience, or quality control of non-safety-related aspects should not be regarded as safety functions under this Regulation, where their failure would not directly create risks to health or safety.
Amendment 11 Proposal for a regulation Recital 9
(9) To streamline compliance and reduce the associated costs, providers of AI systems should not be required to register AI systems referred to in Article 6(3) of Regulation (EU) 2024/1689 in the EU database pursuant to Article 49(2) of that Regulation. Given that such systems are not considered high-risk under certain conditions where they do not pose significant risk of harm to the health, safety or fundamental rights of persons, imposing registration requirements would constitute a disproportionate compliance burden. Nevertheless, a provider who considers that an AI system falls under Article 6(3) remains obligated to document its assessment before that system is placed on the market or put into service. This assessment may be requested by national competent authorities.
(9) To streamline compliance and reduce the associated costs, the registration of AI systems referred to in Article 6(3) of Regulation (EU) 2024/1689 in the EU database pursuant to Article 49(2) of that Regulation should be simplified by streamlining the required content in Section B of Annex VIII to that Regulation. While it remains crucial for effective market surveillance and public accountability that such AI systems are registered in the EU database, the registration requirements should be simplified and made more proportionate. This simplification will strike a better balance without undermining the protection laid down by Regulation (EU) 2024/1689. Such systems are not considered high-risk under certain conditions where they do not pose significant risk of harm to the health, safety or fundamental rights of persons. Furthermore, a provider applying Article 6(3) remains obligated to document its assessment before that system is placed on the market or put into service. This assessment may be requested by national competent authorities.
Amendment 12 Proposal for a regulation Recital 10
(10) Articles 57, 58 and 60 of Regulation (EU) 2024/1689 should be amended to strengthen further cooperation at Union level of AI regulatory sandboxes, foster clarity and consistency in the governance of AI regulatory sandboxes, and to extend the scope of real-world testing outside AI regulatory sandboxes to high-risk AI systems covered by the Union harmonisation legislation listed in Annex I to that Regulation. In particular, to allow procedural simplification, where applicable, in the projects supervised in the AI regulatory sandboxes that include also real-world testing, the real-world testing plan should be integrated in the sandbox plan agreed by the providers or prospective providers and the competent authority in a single document. In addition, it is appropriate to provide for the possibility of the AI Office to establish an AI regulatory sandbox at Union level for AI systems that are covered by Article 75(1) of Regulation (EU) 2024/1689. By leveraging these infrastructures and facilitating cross-border collaboration, coordination would be better streamlined and resources optimally utilised.
(10) Articles 57, 58 and 60 of Regulation (EU) 2024/1689 should be amended to strengthen further cooperation at Union level of AI regulatory sandboxes, foster clarity and consistency in the governance of AI regulatory sandboxes, and to extend the scope of real-world testing outside AI regulatory sandboxes to high-risk AI systems covered by the Union harmonisation legislation listed in Annex I to that Regulation. In particular, to allow procedural simplification, where applicable, in the projects supervised in the AI regulatory sandboxes that include also real-world testing, the real-world testing plan should be integrated in the sandbox plan agreed by the providers or prospective providers and the competent authority in a single document. In addition, it is appropriate to provide for the possibility of the AI Office to establish an AI regulatory sandbox at Union level for AI systems that are covered by Article 75(1) of Regulation (EU) 2024/1689. When discussions are held within the framework of the Board, the European Data Protection Supervisor and the AI Office, as part of their roles within the Board, should provide feedback and exchange best practices on matters related to the establishment and operation of AI regulatory sandboxes that were established under their respective competences. By leveraging these infrastructures and facilitating cross-border collaboration, coordination would be streamlined and resources optimally utilised. In order to foster innovation and facilitate the uptake of AI, SMEs, including startups, and SMCs should be provided with priority access to the AI regulatory sandboxes established by the AI Office.
Where AI regulatory sandboxes, including the controlled environment to foster innovation, involve innovative AI systems that process personal data, the relevant national supervisory authorities should be involved in accordance with their tasks and powers.
Amendment 13 Proposal for a regulation Recital 11
(11) To foster innovation, it is also appropriate to extend the scope of real-world testing outside AI regulatory sandboxes in Article 60 of Regulation (EU) 2024/1689, currently applicable to high-risk AI systems listed in Annex III to that Regulation, and allow providers and prospective providers of high-risk AI systems covered by the Union harmonisation legislation listed in Annex I to that Regulation to also test such systems in real-world conditions. This is without prejudice to other Union or national law on the testing in real-world conditions of high-risk AI systems related to products covered by that Union harmonisation legislation. To address the specific situation of high-risk AI systems covered by the Union harmonisation legislation listed in Section B of Annex I to that Regulation, it is necessary to allow the conclusion of voluntary agreements between the Commission and Member States to enable testing of such high-risk AI systems in real-world conditions.
(11) To foster innovation, it is also appropriate to extend the scope of real-world testing outside AI regulatory sandboxes in Article 60 of Regulation (EU) 2024/1689, currently applicable to high-risk AI systems listed in Annex III to that Regulation, and allow providers and prospective providers of high-risk AI systems covered by the Union harmonisation legislation listed in Annex I to that Regulation to also test such systems in real-world conditions. This is without prejudice to other Union or national law on the testing in real-world conditions of high-risk AI systems related to products covered by that Union harmonisation legislation. To address the specific situation of high-risk AI systems covered by the Union harmonisation legislation listed in Section B of Annex I to that Regulation, it is necessary to allow the conclusion of voluntary agreements between the Commission and Member States to enable testing of such high-risk AI systems in real-world conditions, subject to sufficient safeguards.
Amendment 14 Proposal for a regulation Recital 12 a (new)
(12a) In order to allow the AI Office to effectively exercise its duties under Regulation (EU) 2024/1689 and in light of the new powers conferred on it by this Regulation, adequate human, financial and technical resources should be provided, without prejudice to the budgetary procedure and existing financial instruments. In particular, the AI Office should have a sufficient number of personnel whose expertise include an in-depth understanding of AI technologies.
Amendment 15 Proposal for a regulation Recital 13
(13) Article 69 of Regulation (EU) 2024/1689 should be amended to simplify the fee structure of the scientific panel. If Member States call upon the panel’s expertise, the fees they may be required to pay the experts should be equivalent to the remuneration the Commission is obliged to pay in similar circumstances. Furthermore, to reduce the procedural complexity, Member States should be able to consult the experts of the scientific panel directly, without involvement of the Commission.
(13) Article 69 of Regulation (EU) 2024/1689 should be amended to simplify the fee structure of the scientific panel. If Member States call upon the panel’s expertise, the fees they may be required to pay the experts should be equivalent to the remuneration the Commission is obliged to pay in similar circumstances.
Amendment 16 Proposal for a regulation Recital 14
(14) In order to strengthen the governance system for AI systems based on general-purpose AI models, it is necessary to clarify the role of the AI Office in monitoring and supervising compliance of such AI systems with Regulation (EU) 2024/1689, while excluding AI systems related to products covered by the Union harmonisation legislation listed in Annex I to that Regulation. While sectoral authorities continue to remain responsible for the supervision of AI systems related to products covered by that Union harmonisation legislation, Article 75(1) of Regulation (EU) 2024/1689 should be modified to bring all AI systems based on general-purpose AI models developed by the same provider within the scope of the AI Office's supervision. This does not include AI systems placed on the market, put into service or used by Union institutions, bodies, offices or agencies, which are under the supervision of the European Data Protection Supervisor pursuant to Article 74(9) of Regulation (EU) 2024/1689. To ensure effective supervision for those AI systems in accordance with the tasks and responsibilities assigned to market surveillance authorities under Regulation (EU) 2024/1689, the AI Office should be empowered to take the appropriate measures and decisions to adequately exercise its powers provided for in that Section and Regulation (EU) 2019/1020 of the European Parliament and of the Council11. Article 14 of Regulation (EU) 2019/1020 should apply mutatis mutandis. Furthermore, to ensure effective enforcement, the authorities involved in the application of Regulation (EU) 2024/1689 should cooperate actively in the exercise of those powers, in particular where enforcement actions need to be taken in the territory of a Member State.
(14) In order to strengthen the governance system for AI systems based on general-purpose AI models, it is necessary to clarify the role of the AI Office in monitoring and supervising compliance of such AI systems with Regulation (EU) 2024/1689, while excluding AI systems related to products covered by the Union harmonisation legislation listed in Annex I to that Regulation and AI systems referred to in Annex III, point 2, to that Regulation. While sectoral authorities continue to remain responsible for the supervision of AI systems related to products covered by that Union harmonisation legislation, Article 75(1) of Regulation (EU) 2024/1689 should be modified to bring all AI systems based on general-purpose AI models developed by the same provider within the scope of the AI Office's supervision. This does not include AI systems placed on the market, put into service or used by Union institutions, bodies, offices or agencies, which are under the supervision of the European Data Protection Supervisor pursuant to Article 74(9) of Regulation (EU) 2024/1689. To ensure effective supervision for those AI systems in accordance with the tasks and responsibilities assigned to market surveillance authorities under Regulation (EU) 2024/1689, the AI Office should take the appropriate measures and decisions to adequately exercise its powers provided for in that Section and Regulation (EU) 2019/1020 of the European Parliament and of the Council11. Article 14 of Regulation (EU) 2019/1020 should apply mutatis mutandis. Furthermore, to ensure effective enforcement, the authorities involved in the application of Regulation (EU) 2024/1689 should cooperate actively in the exercise of those powers, in particular where enforcement actions need to be taken in the territory of a Member State.
__________________
11 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1, ELI: http://data.europa.eu/eli/reg/2019/1020/oj).
Amendment 17 Proposal for a regulation Recital 16
(16) To further operationalise the AI Office’s supervision and enforcement set out in Article 75(1) of Regulation (EU) 2024/1689, it is necessary to further define the which of the powers listed in Article 14 of Regulation (EU) 2019/1020 should be conferred upon the AI Office. The Commission should therefore be empowered to adopt implementing acts to specify those powers, including the ability to impose penalties, such as fines or other administrative sanctions, in accordance with the conditions and ceilings referred to in Article 99, and applicable procedures. This should ensure that the AI Office has the necessary tools to effectively monitor and supervise compliance with Regulation (EU) 2024/1689.
(16) To further operationalise the AI Office’s supervision and enforcement set out in Article 75(1) of Regulation (EU) 2024/1689, it is necessary to further define which of the powers listed in Article 14 of Regulation (EU) 2019/1020 should be conferred upon the AI Office. The Commission should therefore be empowered to adopt implementing acts to specify those powers, including the ability to impose penalties, such as fines or other administrative sanctions, in accordance with the conditions and ceilings referred to in Article 99, and applicable procedures. This should ensure that the AI Office has the necessary tools to effectively monitor and supervise compliance with Regulation (EU) 2024/1689.
Amendment 18 Proposal for a regulation Recital 18
(18) To enable access to the Union market for AI systems which are under the supervision of the AI Office pursuant to Article 75 of Regulation (EU) 2024/1689 and subject to third-party conformity assessment, the Commission should be enabled to carry out pre-market conformity assessments of those systems.
(18) To enable access to the Union market for AI systems which are under the supervision of the AI Office pursuant to Article 75 of Regulation (EU) 2024/1689 and subject to third-party conformity assessment, the Commission should ensure that pre-market conformity assessments are carried out for those systems. Furthermore, the AI Office should maintain organised records of communications with providers and deployers of general-purpose AI models with systemic risk. Such records should be documented in a consistent manner.
Amendment 19 Proposal for a regulation Recital 19
(19) Article 77 and related provisions of Regulation (EU) 2024/1689 constitute an important governance mechanism, as they aim to enable authorities or bodies responsible for enforcing or supervising Union law intended to protect fundamental rights to fulfil their mandate under specific conditions and to foster cooperation with market surveillance authorities responsible for the supervision and enforcement of that Regulation. It is necessary to clarify the scope of such cooperation, as well as to clarify which public authorities or bodies benefit from it. With a view to reinforcing the cooperation, it should be clarified that requests to access information and documentation should be made to the competent market surveillance authority, which should respond to such requests, and that the involved authorities or bodies should have a mutual obligation to cooperate.
(19) Article 77 and related provisions of Regulation (EU) 2024/1689 constitute an important governance mechanism, as they aim to enable authorities or bodies responsible for enforcing or supervising Union law intended to protect fundamental rights to fulfil their mandate under specific conditions and to foster cooperation with market surveillance authorities responsible for the supervision and enforcement of that Regulation. It is necessary to clarify the scope of such cooperation, as well as to clarify which public authorities or bodies benefit from it. With a view to reinforcing the cooperation, it should be clarified that requests to access information and documentation should be made to the competent market surveillance authority, which should respond to such requests without undue delay, and that the involved authorities or bodies should have a mutual obligation to cooperate. It should be clarified that these provisions are without prejudice to the tasks, powers and independence of the relevant national public authorities or bodies under their mandates. In particular, those provisions do not limit any powers that those authorities and bodies have to request information pursuant to other Union or national law. Accordingly, those authorities and bodies retain any power they have to directly request information from operators pursuant to their mandate or other law.
Amendment 20 Proposal for a regulation Recital 20
(20) To allow sufficient time for providers of generative AI systems subject to the marking obligations laid down in Article 50(2) of Regulation (EU) 2024/1689 to adapt their practices within a reasonable time without disrupting the market, it is appropriate to introduce a transitional period of 6 months for providers who have already placed their systems on the market before 2 August 2026.
(20) To allow sufficient time for providers of generative AI systems subject to the marking obligations laid down in Article 50(2) of Regulation (EU) 2024/1689 to adapt their practices within a reasonable time without disrupting the market, it is appropriate to introduce a transitional period of 3 months for providers who have already placed their systems on the market before 2 August 2026.
Amendment 21 Proposal for a regulation Recital 22
(22) Article 113 of Regulation (EU) 2024/1689 establishes the dates of entry into force and application of that Regulation, notably that the general date of application is 2 August 2026. For the obligations related to high-risk AI systems laid down in Sections 1, 2 and 3 of Chapter III of Regulation (EU) 2024/1689, the delayed availability of standards, common specifications, and alternative guidance and the delayed establishment of national competent authorities lead to challenges that jeopardise those obligations' effective entry into application and that risk to significantly increase implementation costs in a way that does not justify maintaining their initial date of application, namely 2 August 2026. Building on experience, it is appropriate to put in place a mechanism that links the entry into application to the availability of measures in support of compliance with Chapter III, which may include harmonised standards, common specifications, and Commission guidelines. This should be confirmed by the Commission by decision, following which the obligations for high-risk AI systems should apply after 6 months as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III and after 12 months as regards AI systems classified as high-risk pursuant to Article 6(1) and Annex I to Regulation (EU) 2024/1689. However, this flexibility should only be extended until 2 December 2027 as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III and until 2 August 2028 as regards AI systems classified as high-risk pursuant to Article 6(1) and Annex I to that Regulation, by which dates those rules should enter into application in any case.
The distinction between the entry into application of the rules as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III and Article 6(1) and Annex I to that Regulation is consistent with the difference between the initial dates of application envisaged in Regulation (EU) 2024/1689 and aims to provide the necessary time for adaptation and implementation of the corresponding obligations.
(22) Article 113 of Regulation (EU) 2024/1689 establishes the dates of entry into force and application of that Regulation, notably that the general date of application is 2 August 2026. For the obligations related to high-risk AI systems laid down in Sections 1, 2 and 3 of Chapter III of Regulation (EU) 2024/1689, the delayed availability of standards, common specifications, and alternative guidance and the delayed establishment of national competent authorities lead to challenges that jeopardise those obligations' effective entry into application and that risk to significantly increase implementation costs in a way that does not justify maintaining their initial date of application, namely 2 August 2026. It is appropriate that the date of application of obligations on AI systems classified as high-risk pursuant to Article 6(2) and Annex III and on AI systems classified as high-risk pursuant to Article 6(1) and Annex I to Regulation (EU) 2024/1689 is postponed until 2 December 2027 as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III and until 2 August 2028 as regards AI systems classified as high-risk pursuant to Article 6(1) and Annex I to that Regulation. The distinction between the entry into application of the rules as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III and Article 6(1) and Annex I to that Regulation is consistent with the difference between the initial dates of application envisaged in Regulation (EU) 2024/1689 and aims to provide the necessary time for adaptation and implementation of the corresponding obligations.
Amendment 22 Proposal for a regulation Recital 22 a (new)
(22a) In order to ensure legal certainty and to avoid further delays in application of this Regulation, the Commission should ensure that measures in support of compliance with regard to Chapter III, Sections 1, 2, and 3 are in place in due time to ensure timely and effective implementation of the necessary provisions.
Amendment 23 Proposal for a regulation Recital 23
(23) In light of the objective to reduce implementation challenges for citizens, businesses and public administrations, it is essential that harmonised conditions for the implementation of certain rules are adopted only where strictly necessary. For that purpose, it is appropriate to remove certain empowerments bestowed on the Commission to adopt such harmonised conditions by means of implementing acts in cases where those conditions are not met. Regulation (EU) 2024/1689 should therefore be amended to remove the empowerments conferred on the Commission in Article 50(7), Article 56(6), and Article 72(3) thereof to adopt implementing acts. The removal of the empowerment to adopt a harmonised template for a post-market monitoring plan in Article 72(3) of Regulation (EU) 2024/1689 has as an additional benefit that it will offer more flexibility for providers of high-risk AI systems to put in place a system for post-market monitoring that is tailored to their organisation. At the same time, recognising the need to offer clarity on how providers of high-risk AI systems are required to comply, the Commission should be required to publish guidance.
(23) In light of the objective to reduce implementation challenges for citizens, businesses and public administrations, it is essential that harmonised conditions for the implementation of certain rules are adopted only where strictly necessary. For that purpose, it is appropriate to remove certain empowerments bestowed on the Commission to adopt such harmonised conditions by means of implementing acts in cases where those conditions are not met. Regulation (EU) 2024/1689 should therefore be amended to remove the empowerments conferred on the Commission in Article 50(7), Article 56(6), and Article 72(3) thereof to adopt implementing acts. At the same time, recognising the need to offer clarity on how providers of high-risk AI systems are required to comply with their monitoring obligations, the Commission should be required to publish guidance on the post-market monitoring plan, including a template with elements to be included therein, by 2 February 2027.
Amendment 24 Proposal for a regulation Recital 23 a (new)
(23a) The parallel application of sectoral Union harmonisation legislation listed in Section A of Annex I to Regulation (EU) 2024/1689 of the European Parliament and of the Council and the requirements set out in that Regulation for high-risk artificial intelligence systems may lead to overlaps of requirements and unnecessary administrative burden for economic operators. Such overlaps could create legal uncertainty, increase compliance costs and potentially lead to competitive disadvantages, without providing additional benefits for the protection of health, safety or fundamental rights. In order to ensure a more coherent and proportionate regulatory framework and to simplify the application of requirements for artificial intelligence systems embedded in products regulated under Union harmonisation legislation, the references to the Union harmonisation legislation currently listed in Section A of Annex I to Regulation (EU) 2024/1689 should therefore be moved to Section B of that Annex. This approach clarifies that artificial intelligence systems integrated into products covered by those sectoral acts are subject to the requirements of this Regulation where relevant, while allowing the conformity assessment procedures and product safety requirements under the respective sectoral legislation to remain the primary framework. Any remaining gaps relating to artificial intelligence systems integrated into such products should be addressed within the relevant sectoral legislation.
Amendment 25 Proposal for a regulation Recital 23 b (new)
(23b) In order to safeguard the horizontal nature of this Regulation and ensure the proper functioning of the internal market, the relevant requirements laid down in Chapter III, Section 2 of this Regulation should be deemed to constitute essential health and safety requirements for high-risk AI systems covered by Union harmonisation legislation listed in Annex I and should be applied in a consistent and coherent manner across those sectoral frameworks. For this purpose, the Commission should be entitled to adopt delegated acts taking into account the requirements set out in Chapter III, Section 2 of this Regulation as regards their application to AI systems falling within its scope as well as relevant harmonised standards. In doing so, the Commission should not go beyond the requirements laid down in Regulation (EU) 2024/1689 for this purpose and should take into account the specific context of sectorial legislation. Before adopting the acts referred to in the first subparagraph, the Commission should conduct open and transparent consultations with relevant stakeholders, including competent authorities, notified bodies, civil society and industry.
Amendment 26 Proposal for a regulation Recital 25 a (new)
(25a) When implementing and enforcing this Regulation, national competent authorities, the AI Office and the Commission should take into account the objectives set out in Article 1(1) of Regulation (EU) 2024/1689 and follow the principles of necessity, proportionality, legal certainty and technological neutrality, while at the same time ensuring that unnecessary administrative and compliance burdens are minimised.
Amendment 27 Proposal for a regulation Article 1 – paragraph 1 – point 2 Regulation (EU) 2024/1689 Article 2 – paragraph 2
2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Article 60a, Articles 102 to 109 and Articles 111 and 112 shall apply. Article 57 shall apply only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.;
2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Annex I, only Article 6(1), Article 60a, Articles 102 to 109, Articles 110a-110l and Articles 111 and 112 shall apply. Article 57 shall apply only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.;
Amendment 28 Proposal for a regulation Article 1 – paragraph 1 – point 4 Regulation (EU) 2024/1689 Article 4 – paragraph 1
‘The Commission and Member States shall encourage providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, level of education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.;
1. ‘Providers and deployers of AI systems shall take measures to support the improvement of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. This obligation does not cover any guarantee of a specific level of AI literacy of any individual.;
Amendment 29 Proposal for a regulation Article 1 – paragraph 1 – point 4 Regulation (EU) 2024/1689 Article 4 – paragraph 1a (new)
(1a) The Commission shall issue guidance on the practical implementation of the obligation on providers and deployers of AI systems under paragraph 1.
Amendment 30 Proposal for a regulation Article 1 – paragraph 1 – point 4 Regulation (EU) 2024/1689 Article 4 – paragraph 1b (new)
(1b) The Commission and the Member States shall encourage and support AI literacy in society and among the general population and support, facilitate and complement the efforts of providers and deployers of AI systems, in particular SMEs, for example via the creation of Public Private Partnerships in fulfilling their obligation under paragraph 1.;
Amendment 31 Proposal for a regulation Article 1 – paragraph 1 – point 5 Regulation (EU) 2024/1689 Article 4 a (new) – paragraph 1
1. To the extent necessary to ensure bias detection and correction in relation to high-risk AI systems in accordance with Article 10 (2), points (f) and (g), of this Regulation, providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the safeguards set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable, all the following conditions shall be met in order for such processing to occur:
1. To the extent strictly necessary to ensure bias detection and correction in relation to high-risk AI systems in accordance with Article 10 (2), points (f) and (g), of this Regulation, providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the safeguards set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable, all the following conditions shall be met in order for such processing to occur:
Amendment 32 Proposal for a regulation Article 1 – paragraph 1 – point 5 Regulation (EU) 2024/1689 Article 4 a (new) – paragraph 2
2. Paragraph 1 may apply to providers and deployers of other AI systems and models and deployers of high-risk AI systems where necessary and proportionate, if the processing occurs for the purposes set out therein and provided that the conditions and safeguards set out in paragraph 1 are complied with.;
2. Providers and deployers of other AI systems and models and deployers of high-risk AI systems may exceptionally process special categories of personal data to the extent that:
(a) processing is necessary to ensure bias detection and correction in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; and
(b) all of the conditions and safeguards set out in paragraph 1 are applied.
This paragraph does not create any obligation to conduct such bias detection and correction.’
Amendment 33 Proposal for a regulation Article 1 – paragraph 1 – point 5 a (new) Regulation (EU) 2024/1689 Article 5 – paragraph 1 – subparagraph 1 – point ha (new)
(5a) in Article 5, paragraph 1, subparagraph 1 the following point is added:
(ha) the placing on the market, the putting into service or the use of an AI system that alters, manipulates or artificially generates realistic images or videos so as to depict sexually explicit activities or the intimate parts of an identifiable natural person, without that person’s consent.
This prohibition does not apply to providers or deployers of AI systems who have put in place effective safety measures to prevent the generation of such depictions and to avoid misuse continuously after the system has been placed on the market or put into service, despite the intention of the provider or deployer.
This prohibition shall not prevent AI providers from developing any capabilities referred to in the first subparagraph.
Amendment 34 Proposal for a regulation Article 1 – paragraph 1 – point 5 b (new) Regulation (EU) 2024/1689 Article 6 – paragraph 1
(5b) Article 6(1) is amended as follows:
1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
"1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(a) the AI system is intended to be used as a safety component of a product and whose functioning is necessary to ensure that the product or AI system complies with applicable Union safety requirements, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
"
(Regulation (EU) 2024/1689)
Amendment 35 Proposal for a regulation Article 1 – paragraph 1 – point 5 c (new) Regulation (EU) 2024/1689 Article 6 – paragraph 1 a (new)
(5c) In Article 6, paragraph 1a is added:
‘1a. For the purposes of this Regulation, functionalities intended solely for user assistance, performance optimisation, service efficiency, automation, convenience, or quality control of non-safety-related aspects shall not be regarded as safety functions under this Regulation, where their failure would not directly create risks to health or safety.’
Amendment 36 Proposal for a regulation Article 1 – paragraph 1 – point 6 Regulation (EU) 2024/1689 Article 6 – paragraph 4
(6) in Article 6, paragraph 4 is replaced by the following:
deleted
‘
4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Upon request of national competent authorities, the provider shall provide the documentation of the assessment.;
’
Amendment 37 Proposal for a regulation Article 1 – paragraph 1 – point 9 a (new) Regulation (EU) 2024/1689 Article 25 – paragraph 2
(9a) Article 25(2) is replaced by the following:
2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.
"2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation.
That initial provider, as well as providers of general-purpose AI models whose models are integrated into high-risk AI systems, shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems.
This obligation shall include:
(a) the provision of technical documentation sufficient to assess compliance with Article 16 requirements;
(b) the disclosure of known limitations and failure modes that could affect high-risk applications;
(c) the provision of reasonable technical access for testing and validation purposes.
This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.’
"
(Regulation (EU) 2024/1689)
Amendment 38 Proposal for a regulation Article 1 – paragraph 1 – point 9 b (new) Regulation (EU) 2024/1689 Article 27 – paragraph 4
(9b) in Article 27, paragraph 4 is replaced by the following:
4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.
"4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the deployer shall, when conducting the fundamental rights impact assessment referred to in paragraph 1 of this Article, include cross-references to the relevant sections of that data protection impact assessment or include relevant parts of that data protection impact assessment into the fundamental rights impact assessment.
"
(Regulation 2024/1689)
Amendment 39 Proposal for a regulation Article 1 – paragraph 1 – point 10 Regulation (EU) 2024/1689 Article 28 – paragraph 8 (new)
(10) in Article 28, the following paragraph 8 is added: Notifying authorities designated under this Regulation responsible for AI systems covered by the Union harmonisation legislation listed in Section A of Annex I shall be established, organised and operated in such a way that ensures that the conformity assessment body that applies for designation both under this Regulation and the Union harmonisation legislation listed in Section A of Annex I shall be provided with the possibility to submit a single application and undergo a single assessment procedure to be designated under this Regulation and Union harmonisation legislation listed in Section A of Annex I, where the relevant Union harmonisation legislation provides for such single application and single assessment procedure.
deleted
Amendment 40 Proposal for a regulation Article 1 – paragraph 1 – point 10 Regulation (EU) 2024/1689 Article 28 – paragraph 8 (new) – subparagraph 1
The single application and single assessment procedure referred to in this paragraph shall also be made available to notified bodies already designated under the Union harmonisation legislation listed in Section A of Annex I, when those notified bodies apply for designation under this Regulation, provided that the relevant Union harmonisation legislation provides for such a procedure.
deleted
Amendment 41 Proposal for a regulation Article 1 – paragraph 1 – point 10 Regulation (EU) 2024/1689 Article 28 – paragraph 8 (new) – subparagraph 2
The single application and single assessment procedure shall avoid any unnecessary duplications, build on the existing procedures for designation under the Union harmonisation legislation listed in Section A of Annex I and ensure compliance with the requirements both relating to notified bodies under this Regulation and the relevant Union harmonisation legislation.;
deleted
Amendment 42 Proposal for a regulation Article 1 – paragraph 1 – point 11 Regulation (EU) 2024/1689 Article 29 – paragraph 4 – second subparagraph
Notified bodies, which are designated under any of the Union harmonisation legislation listed in Section A of Annex I and which apply for the single assessment referred to in Article 28(8), shall submit the single application for assessment to the notifying authority designated in accordance with that Union harmonisation legislation.
deleted
Amendment 43 Proposal for a regulation Article 1 – paragraph 1 – point 12 a (new) Regulation (EU) 2024/1689 Article 42 – paragraph 2 a (new)
(12a) In Article 42, the following paragraph is inserted:
'2a. Where an AI system is subject to the requirements of Regulation (EU) 2024/2847 as well as requirements set out in Article 15, and where those high-risk AI systems fulfil the essential cybersecurity requirements set out in Regulation (EU) 2024/2847, they shall be presumed to comply with the cybersecurity requirements set out in Article 15 in so far as those requirements are covered by the EU declaration of conformity or parts thereof issued pursuant to Regulation (EU) 2024/2847.';
Amendment 44 Proposal for a regulation Article 1 – paragraph 1 – point 13 Regulation (EU) 2024/1689 Article 43 – paragraph 3
(13) in Article 43, paragraph 3 is replaced by the following:
deleted
‘
For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider of the system shall follow the relevant conformity assessment procedure as required under the relevant Union harmonisation legislation. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Assessment of the quality management system set out in Article 17 and Annex VII shall also apply.
For the purposes of that conformity assessment, notified bodies which have been notified under the Union harmonisation legislation listed in Section A of Annex I shall have the power to assess the conformity of high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with the requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under the relevant Union harmonisation legislation. Without prejudice to Article 28, such notified bodies which have been notified under the Union harmonisation legislation in Section A of Annex I, shall apply for designation in accordance with Section 4 at the latest [18 months from the entry into application of this Regulation].
Where Union harmonisation legislation listed in Section A of Annex I provides the product manufacturer with an option to opt out from a third-party conformity assessment, provided that that manufacturer has applied harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter.
Where a high-risk AI system is both covered by the Union harmonisation legislation listed in Section A of Annex I and it falls within one of the categories listed in Annex III, the provider of the system shall follow the relevant conformity assessment procedure as required under the relevant Union harmonisation legislation listed in Section A of Annex I.;
’
Amendment 45 Proposal for a regulation Article 1 – paragraph 1 – point 13 Regulation (EU) 2024/1689 Article 43 – paragraph 4 – subparagraph 3
Where a high-risk AI system is both covered by the Union harmonisation legislation listed in Section A of Annex I and it falls within one of the categories listed in Annex III, the provider of the system shall follow the relevant conformity assessment procedure as required under the relevant Union harmonisation legislation listed in Section A of Annex I.;
deleted
Amendment 46 Proposal for a regulation Article 1 – paragraph 1 – point 14 Regulation (EU) 2024/1689 Article 49 – paragraph 2
(14) in Article 49, paragraph 2 is deleted;
deleted
Amendment 47 Proposal for a regulation Article 1 – paragraph 1 – point 15 Regulation (EU) 2024/1689 Article 50 – paragraph 7
7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection, marking and labelling of artificially generated or manipulated content. The Commission may assess whether adherence to those codes of practice is adequate to ensure compliance with the obligation laid down in paragraph 2, in accordance with the procedure laid down in Article 56(6), first subparagraph. If it deems the code is not adequate, the Commission may adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).;
7. The Commission shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection, marking and labelling of artificially generated or manipulated content. The Commission shall assess whether adherence to those codes of practice is adequate to ensure compliance with the obligation laid down in paragraph 2, in accordance with the procedure laid down in Article 56(6), first subparagraph. If it deems the code is not adequate, the Commission may adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).;
Amendment 48 Proposal for a regulation Article 1 – paragraph 1 – point 16 Regulation (EU) 2024/1689 Article 56 – paragraph 6
6. The Commission and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The Commission, taking utmost account of the opinion of the Board, shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives. The Commission shall publish its assessment of the adequacy of the codes of practice.;
6. The Commission and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of practice by the participants and their contribution to the proper application of this Regulation. The Commission, taking utmost account of the opinion of the Board and other relevant competent authorities, shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly monitor and evaluate the achievement of their objectives. The Commission shall publish its assessment of the adequacy of the codes of practice.;
Amendment 49 Proposal for a regulation Article 1 – paragraph 1 – point 17 – point a Regulation (EU) 2024/1689 Article 57 – paragraph 3 a (new)
The AI Office may also establish an AI regulatory sandbox at Union level for AI systems covered by Article 75(1). Such an AI regulatory sandbox shall be implemented in close cooperation with relevant competent authorities, in particular when Union legislation other than this Regulation is supervised in the AI regulatory sandbox, and shall provide priority access to SMEs.;
(3a) The AI Office may also establish an AI regulatory sandbox at Union level for AI systems covered by Article 75(1). Such an AI regulatory sandbox shall be implemented in close cooperation with relevant competent authorities, in particular when Union legislation other than this Regulation is supervised in the AI regulatory sandbox, and shall provide priority access to SMEs, including startups.;
The AI Office shall ensure that, to the extent innovative AI systems referred to in paragraph 5 involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities, the EDPB and those other national or competent authorities are associated with the operation of the AI regulatory sandbox established at Union level and involved in the supervision of those aspects to the extent that they relate to their respective tasks and powers, in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680.;
Amendment 50 Proposal for a regulation Article 1 – paragraph 1 – point 17 – point b Regulation (EU) 2024/1689 Article 57 – paragraph 5
5. AI regulatory sandboxes established under this Article shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority, ensuring that appropriate safeguards are in place. Such sandboxes may include testing in real world conditions supervised therein. When applicable, the sandbox plan shall incorporate in a single document the real-world testing plan.;
5. AI regulatory sandboxes established under this Article shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authorities, ensuring that appropriate safeguards are in place. Such sandboxes may include testing in real world conditions supervised therein. When applicable, the sandbox plan shall incorporate in a single document the real-world testing plan.;
Amendment 51 Proposal for a regulation Article 1 – paragraph 1 – point 17 – point e Regulation (EU) 2024/1689 Article 57 – paragraph 14
14. National competent authorities shall coordinate their activities and cooperate within the framework of the Board. They shall support the joint establishment and operation of AI regulatory sandboxes, including in different sectors.;
14. National competent authorities shall coordinate their activities and cooperate within the framework of the Board. They shall support the joint establishment and operation of AI regulatory sandboxes, including in different sectors.;
When discussions are held within the framework of the Board, the European Data Protection Supervisor and the AI Office shall, as part of their roles within the Board, also provide their feedback and exchange best practices on matters related to the establishment and operation of AI regulatory sandboxes established under their respective competences.;
Amendment 52 Proposal for a regulation Article 1 – paragraph 1 – point 18 Regulation (EU) 2024/1689 Article 58 – paragraph 1 – point d (new)
(d) the detailed rules applicable to the governance of AI regulatory sandboxes covered under Article 57, including as regards the exercise of the tasks of the competent authorities and the coordination and cooperation at national and EU level.;
(d) the detailed rules applicable to the governance of AI regulatory sandboxes covered under Article 57, including as regards the exercise of the tasks of the competent authorities, the involvement and supervision by the competent data protection authorities and the coordination and cooperation at national and EU level.;
Amendment 53 Proposal for a regulation Article 1 – paragraph 1 – point 19 – point a Regulation (EU) 2024/1689 Article 60 – paragraph 1
Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III or covered by Union harmonisation legislation listed in Section A of Annex I, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5.;
1. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5.;
Amendment 54 Proposal for a regulation Article 1 – paragraph 1 – point 19 – point b Regulation (EU) 2024/1689 Article 60 – paragraph 2
2. Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III or covered by Union harmonisation legislation listed in Section A of Annex I in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers.;
2. Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers.;
Amendment 55 Proposal for a regulation Article 1 – paragraph 1 – point 20 Regulation (EU) 2024/1689 Article 60 a (new) – paragraph 3
3. Member States, the Commission, market surveillance authorities and public authorities responsible for the management and operation of infrastructure and products covered by Union harmonisation legislation listed in Section B of Annex I shall cooperate closely with each other and in good faith, and shall remove any practical obstacles, including on procedural rules providing access to physical public infrastructure, where this is necessary, to successfully implement the voluntary real-world testing agreement and test AI-enabled products covered by Union harmonisation legislation listed in Section B of Annex I.
3. Member States, the Commission, and national competent authorities such as market surveillance authorities and public authorities responsible for the management and operation of infrastructure and products covered by Union harmonisation legislation listed in Section B of Annex I shall cooperate closely with each other and in good faith, and shall remove any practical obstacles, including on procedural rules providing access to physical public infrastructure, where this is necessary, to successfully implement the voluntary real-world testing agreement and test AI-enabled products covered by Union harmonisation legislation listed in Section B of Annex I.
Amendment 56 Proposal for a regulation Article 1 – paragraph 1 – point 21 Regulation (EU) 2024/1689 Article 63 – paragraph 1
1. SMEs, including start-ups, may comply with certain elements of the quality management system required by Article 17 in a simplified manner. For that purpose, the Commission shall develop guidelines on the elements of the quality management system which may be complied with in a simplified manner considering the needs of SMEs, without affecting the level of protection or the need for compliance with the requirements in respect of high-risk AI systems.;
1. SMEs, including start-ups, and micro enterprises may comply with certain elements of the quality management system required by Article 17 in a simplified manner. For that purpose, the Commission shall develop guidelines on the elements of the quality management system which may be complied with in a simplified manner considering the needs of SMEs and micro enterprises, without affecting the level of protection or the need for compliance with the requirements in respect of high-risk AI systems.;
Amendment 57 Proposal for a regulation Article 1 – paragraph 1 – point 21 a (new) Regulation (EU) 2024/1689 Article 64 – paragraph 2a (new)
(21a) In Article 64, paragraph 2a is added:
‘(2a) Without prejudice to the budgetary procedure and through existing financial instruments, the AI Office shall be allocated adequate human, financial and technical resources, and infrastructure, to fulfil its tasks, to effectively perform its duties and exercise its powers in respect of the enforcement of Regulation (EU) 2024/1689. In particular, the AI Office shall have a sufficient number of personnel permanently available with in-depth competences and technical expertise. The AI Board shall assess competence and resource requirements.’
Amendment 58 Proposal for a regulation Article 1 – paragraph 1 – point 22 – point b Regulation (EU) 2024/1689 Article 69 – paragraph 3
(b) paragraph 3 is deleted.
deleted
Amendment 59 Proposal for a regulation Article 1 – paragraph 1 – point 24 Regulation (EU) 2024/1689 Article 72 – paragraph 3
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt guidance on the post-market monitoring plan.;
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt guidance on the post-market monitoring plan, including a template with elements to be included by 2 February 2027.;
Amendment 60 Proposal for a regulation Article 1 – paragraph 1 – point 25 – point b Regulation (EU) 2024/1689 Article 75 – paragraph 1
Where an AI system is based on a general-purpose AI model, with the exclusion of AI systems related to products covered by the Union harmonisation legislation listed in Annex I, and that model and that system are developed by the same provider, the AI Office shall be exclusively competent for the supervision and enforcement of that system with the obligations of this Regulation in accordance with the tasks and responsibilities assigned by it to market surveillance authorities. The AI Office shall also be exclusively competent for the supervision and enforcement of the obligations under this Regulation in relation to AI systems that constitute or that are integrated into a designated very large online platform or very large online search engine within the meaning of Regulation (EU) 2022/2065.
1. Where an AI system is based on a general-purpose AI model, with the exclusion of AI systems related to products covered by the Union harmonisation legislation listed in Annex I and AI systems referred to in Annex III, point 2, and that model and that system are developed by the same provider or by providers belonging to the same group of undertakings, the AI Office shall have powers to supervise and enforce the obligations of this Regulation in accordance with the tasks and responsibilities assigned by it to market surveillance authorities. The AI Office shall also have powers to supervise and enforce the obligations under this Regulation in relation to AI systems that constitute or that are integrated into a designated very large online platform or very large online search engine within the meaning of Regulation (EU) 2022/2065. Where the Commission has not initiated proceedings for the same infringement, the competent authority of a Member State in which the main establishment of the provider of a very large online platform or of a very large online search engine is located, or where their legal representative is established, may have the powers to supervise and enforce the obligations under this Regulation. Notwithstanding the first subparagraph, the supervision and enforcement powers of the AI Office do not include AI systems placed on the market, put into service or used by Union institutions, bodies, offices or agencies, which are under the supervision of the European Data Protection Supervisor pursuant to Article 74(9) of this Regulation.
Amendment 61 Proposal for a regulation Article 1 – paragraph 1 – point 25 – point b Regulation (EU) 2024/1689 Article 75 – paragraph 1 – subparagraph 2
When exercising its tasks of supervision and enforcement under the first subparagraph, the AI Office shall have all the powers of a market surveillance authority provided for in this Section and in Regulation (EU) 2019/1020. The AI Office shall be empowered to take appropriate measures and decisions to adequately exercise its supervisory and enforcement powers. Article 14 of Regulation (EU) 2019/1020 shall apply mutatis mutandis.
When exercising its tasks of supervision and enforcement under the first subparagraph, the AI Office shall have all the powers of a market surveillance authority provided for in this Section and in Regulation (EU) 2019/1020. The AI Office shall take appropriate measures and decisions to adequately exercise its supervisory and enforcement powers. Article 14 of Regulation (EU) 2019/1020 shall apply mutatis mutandis.
Amendment 62 Proposal for a regulation Article 1 – paragraph 1 – point 25 – point b a (new) Regulation (EU) 2024/1689 Article 75 – paragraph –1a (new)
(ba) in Article 75, paragraph -1a is inserted:
‘-1a. In the implementation and enforcement of this Regulation, the AI Office shall promote innovation, competitiveness and the protection of fundamental rights, taking them into consideration in the exercise of its functions. The AI Office shall coordinate closely with the competent data protection authorities designated pursuant to Regulation (EU) 2016/679 in matters involving the processing of personal data falling within the scope of that Regulation.’
Amendment 63 Proposal for a regulation Article 1 – paragraph 1 – point 25 – point c Regulation (EU) 2024/1689 Article 75 – paragraph – 1c
The Commission shall organise and carry out pre-market conformity assessments and tests of AI systems referred to in paragraph 1 that are classified as high-risk and subject to third-party conformity assessment under Article 43 before such AI systems are placed on the market or put into service. These tests and assessments shall verify that the systems comply with the relevant requirements of this Regulation and may be placed on the market or put into service in the Union in accordance with this Regulation. The Commission may entrust the performance of these tests or assessments to notified bodies designated under this Regulation, in which case the notified body shall act on behalf of the Commission. Article 34(1) and (2) shall apply mutatis mutandis to the Commission when exercising its powers under this paragraph.
1c. The Commission shall, subject to Article 28(8), ensure that pre-market conformity assessments and tests of AI systems referred to in paragraph 1 that are classified as high-risk and subject to third-party conformity assessment under Article 43 are carried out before such AI systems are placed on the market or put into service. These tests and assessments shall verify that the systems comply with the relevant requirements of this Regulation and may be placed on the market or put into service in the Union in accordance with this Regulation. The Commission shall entrust the performance of these tests or assessments to notified bodies designated under this Regulation, in which case the notified body shall act on behalf of the Commission. Article 34(1) and (2) shall apply mutatis mutandis to the Commission when exercising its powers under this paragraph.
Amendment 64 Proposal for a regulation Article 1 – paragraph 1 – point 26 – point b Regulation (EU) 2024/1689 Article 77 – paragraph 1 – point b
1. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, shall have the power to make a request and access any information or documentation created or maintained from the relevant market surveillance authority under this Regulation in accessible language and format where access to that information or documentation is necessary for effectively fulfilling their mandates within the limits of their jurisdiction.;
1. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, shall have the power to make a request and access any information or documentation created or maintained from the relevant market surveillance authority under this Regulation in accessible language and machine-readable format by electronic means where access to that information or documentation is necessary for effectively fulfilling their mandates within the limits of their jurisdiction. This paragraph is without prejudice to the tasks, powers and independence of the relevant national public authorities or bodies under their mandates in accordance with Union and national law.;
Amendment 65 Proposal for a regulation Article 1 – paragraph 1 – point 26 – point c – introductory part Regulation (EU) 2024/1689 Article 77 – paragraph 1a (new)
(c) the following paragraphs 1a and 1b are inserted:
(c) the following paragraphs 1a, 1b and 1ba are inserted:
Amendment 66 Proposal for a regulation Article 1 – paragraph 1 – point 26 – point c Regulation (EU) 2024/1689 Article 77 – paragraph 1a (new)
1a. Subject to the conditions specified in this Article, the market surveillance authority shall grant the relevant public authority or body referred to in paragraph 1 access to such information or documentation, including by requesting such information or documentation from the provider or the deployer, where necessary.
1a. Subject to the conditions specified in this Article, the market surveillance authority shall grant the relevant public authority or body referred to in paragraph 1 access to such information or documentation, including by requesting such information or documentation from the provider or the deployer, where necessary and without undue delay.
Amendment 67 Proposal for a regulation Article 1 – paragraph 1 – point 26 – point c Regulation (EU) 2024/1689 Article 77 – paragraph 1b (new)
1b. Market surveillance authorities and public authorities or bodies referred to in paragraph 1 shall cooperate closely and provide each other with mutual assistance necessary for fulfilling their respective mandates, with a view to ensuring coherent application of this Regulation and Union law protecting fundamental rights and streamlining procedures. This shall include, in particular, exchange of information where necessary for the effective supervision or enforcement of this Regulation and the respective other Union legislation.;
1b. Market surveillance authorities and public authorities or bodies referred to in paragraph 1 shall cooperate closely and provide each other with mutual assistance necessary for fulfilling their respective mandates, with a view to ensuring coherent application of this Regulation and Union law protecting fundamental rights and streamlining procedures while respecting their respective competences, tasks, powers and independence. This shall include, in particular, exchange of information where necessary for the effective supervision or enforcement of this Regulation and the respective other Union legislation.;
Amendment 68 Proposal for a regulation Article 1 – paragraph 1 – point 26 – point c Regulation (EU) 2024/1689 Article 77 – paragraph 1b a (new)
1ba. Requests for assistance shall contain all the necessary information, including the purpose of and reasons for the request.
Amendment 69 Proposal for a regulation Article 1 – paragraph 1 – point 28 – introductory part Regulation (EU) 2024/1689 Article 96 – paragraph 1
(28) in Article 96(1), the second subparagraph is replaced by the following:
(28) in Article 96(1), point (a) and the second subparagraph are replaced by the following:
Amendment 70 Proposal for a regulation Article 1 – paragraph 1 – point 28 Regulation (EU) 2024/1689 Article 96 – paragraph 1 – point a
(a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25;
-1. (a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Articles 25 and 26;
Amendment 71 Proposal for a regulation Article 1 – paragraph 1 – point 28 Regulation (EU) 2024/1689 Article 96 – paragraph 1 – subparagraph 1
-1a. in Article 96, paragraph 1, subparagraph 1, the following point is inserted:
‘(fa) the application of the obligations referred to in Article 27, including the possibility to reference or include relevant sections or parts of the data protection impact assessment into the fundamental rights impact assessment pursuant to Article 27(4) of this Regulation, using, where relevant, standardised templates.’
Amendment 72 Proposal for a regulation Article 1 – paragraph 1 – point 29 – point a a (new) Regulation (EU) 2024/1689 Article 99 – paragraph 4 – point da (new)
(aa) in paragraph 4 the following point (da) is inserted:
‘(da) obligations of providers and third parties, including providers of general purpose AI models, pursuant to Article 25(2), (3) and (4);’
Amendment 73 Proposal for a regulation Article 1 – paragraph 1 – point 29 – point b Regulation (EU) 2024/1689 Article 99 – paragraph 6
6. In the case of SMCs and SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower.;
6. In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower.;
Amendment 74 Proposal for a regulation Article 1 – paragraph 1 – point 29 – point b Regulation (EU) 2024/1689 Article 99 – paragraph 6 a (new)
6a. In Article 99, paragraph 6a is inserted:
‘In the case of SMCs, with the exception of providers of general-purpose AI models with systemic risk, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 4 and 5, whichever is lower.’
Amendment 75 Proposal for a regulation Article 1 – paragraph 1 – point 29 a (new) Regulation (EU) 2024/1689 Article 110 a (new)
(29a) the following Articles 110a to 110l are inserted:
Article 110a
Amendment to Regulation (EU) 2023/1230
In Article 8 of Regulation (EU) 2023/1230, the following paragraphs 2 and 3 are added:
‘2. The Commission is empowered to adopt delegated acts in accordance with Article 48 to amend the essential health and safety requirements set out in Annex III in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
3. When adopting delegated acts pursuant to paragraph 2 of this Article or Common Specifications pursuant to Article 20 of this Regulation concerning machinery and related products that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110b
Amendment to Regulation (EU) 2025/2509
In Article 5 of Regulation (EU) 2025/2509, the following paragraphs 4 and 5 are added:
‘4. The Commission is empowered to adopt delegated acts in accordance with Article 53 to amend the essential safety requirements set out in Annex II in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
5. When adopting delegated acts pursuant to paragraph 4 of this Article or Common Specifications pursuant to Article 16 of this Regulation concerning toys that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110c
Amendment to Directive 2013/53/EU
In Article 4 of Directive 2013/53/EU, the following paragraphs 3 and 4 are added:
‘3. The Commission is empowered to adopt delegated acts in accordance with Article 50 to amend the essential requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
4. When adopting delegated acts pursuant to paragraph 3 of this Article or Common Specifications pursuant to Article 14a of this Regulation concerning products that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110d
Amendment to Directive 2014/33/EU
In Article 5 of Directive 2014/33/EU, the following paragraphs 3 and 4 are added:
‘3. The Commission is empowered to adopt delegated acts in accordance with Article 42 to amend the essential health and safety requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Directive.
4. When adopting delegated acts pursuant to paragraph 3 of this Article or Common Specifications pursuant to Article 14a of this Directive concerning lifts and safety components for lifts that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110e
Amendment to Directive 2014/34/EU
In Article 4 of Directive 2014/34/EU, the following paragraphs 2 and 3 are added:
‘2. The Commission is empowered to adopt delegated acts in accordance with Article 39 to amend the essential health and safety requirements set out in Annex II in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Directive.
3. When adopting delegated acts pursuant to paragraph 2 of this Article or Common Specifications pursuant to Article 12a of this Directive concerning products that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110f
Amendment to Directive 2014/53/EU
In Article 3 of Directive 2014/53/EU, the following paragraphs 5 and 6 are added:
‘5. The Commission is empowered to adopt delegated acts in accordance with Article 45 to amend the essential requirements in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Directive.
6. When adopting delegated acts pursuant to paragraph 5 of this Article or Common Specifications pursuant to Article 16a of this Directive concerning radio equipment that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110g
Amendment to Directive 2014/68/EU
In Article 4 of Directive 2014/68/EU, the following paragraphs 4 and 5 are added:
‘4. The Commission is empowered to adopt delegated acts in accordance with Article 44 to amend the essential safety requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Directive.
5. When adopting delegated acts pursuant to paragraph 4 of this Article or Common Specifications pursuant to Article 12a of this Directive concerning products that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110h
Amendment to Regulation (EU) 2016/424
In Article 6 of Regulation (EU) 2016/424, the following paragraphs 2 and 3 are added:
‘2. The Commission is empowered to adopt delegated acts in accordance with Article 44 to amend the essential requirements set out in Annex II in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
3. When adopting delegated acts pursuant to paragraph 2 of this Article or Common Specifications pursuant to Article 12a of this Regulation concerning cableway installations, subsystems and safety components that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110i
Amendment to Regulation (EU) 2016/425
In Article 5 of Regulation (EU) 2016/425, the following paragraphs 2 and 3 are added:
‘2. The Commission is empowered to adopt delegated acts in accordance with Article 44 to amend the essential health and safety requirements set out in Annex II in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
3. When adopting delegated acts pursuant to paragraph 2 of this Article or Common Specifications pursuant to Article 14a of this Regulation concerning personal protective equipment that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110j
Amendment to Regulation (EU) 2016/426
In Article 5 of Regulation (EU) 2016/426, the following paragraphs 2 and 3 are added:
‘2. The Commission is empowered to adopt delegated acts in accordance with Article 41 to amend the essential requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
3. When adopting delegated acts pursuant to paragraph 2 of this Article or Common Specifications pursuant to Article 13a of this Regulation concerning appliances or fittings that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110k
Amendment to Regulation (EU) 2017/745
In Article 5 of Regulation (EU) 2017/745, the following paragraphs 7 and 8 are added:
‘7. The Commission is empowered to adopt delegated acts in accordance with Article 115 to amend the general safety and performance requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
8. When adopting implementing acts pursuant to paragraph 6 of this Article, delegated acts pursuant to paragraph 7 of this Article or Common Specifications pursuant to Article 9 of this Regulation concerning devices that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Article 110l
Amendment to Regulation (EU) 2017/746
In Article 5 of Regulation (EU) 2017/746, the following paragraphs 7 and 8 are added:
‘7. The Commission is empowered to adopt delegated acts in accordance with Article 107 to amend the general safety and performance requirements set out in Annex I in order to adapt them to scientific or technical progress or to international developments or to add requirements in relation to emerging risks or technologies. For high-risk AI systems referred to in Article 6(1) of Regulation (EU) 2024/1689, the relevant requirements set out in Chapter III, Section 2, of Regulation (EU) 2024/1689 shall be deemed to constitute essential health and safety requirements for the purpose of this Regulation.
8. When adopting implementing acts pursuant to paragraph 6 of this Article, delegated acts pursuant to paragraph 7 of this Article or Common Specifications pursuant to Article 9 of this Regulation concerning devices that are high-risk AI systems as referred to in Article 6(1) of Regulation (EU) 2024/1689 of the European Parliament and of the Council, or that use high-risk AI systems as safety components, the Commission shall take into account the requirements set out in Chapter III, Section 2, of that Regulation as well as relevant harmonised standards. With regard to high-risk AI systems, the Commission shall not go beyond the requirements laid down in Regulation (EU) 2024/1689.’
Amendment 76 Proposal for a regulation Article 1 – paragraph 1 – point 30 – point b Regulation (EU) 2024/1689 Article 111 – paragraph 4 (new)
4. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, that have been placed on the market before 2 August 2026 shall take the necessary steps in order to comply with Article 50(2) by 2 February 2027.;
4. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, that have been placed on the market before 2 August 2026 shall take the necessary steps in order to comply with Article 50(2) by 2 November 2026.;
Amendment 77 Proposal for a regulation Article 1 – paragraph 1 – point 31 – point a Regulation (EU) 2024/1689 Article 113 – paragraph 3 – point d (new)
Chapter III, Sections 1, 2, and 3, shall apply following the adoption of a decision of the Commission confirming that adequate measures in support of compliance with Chapter III are available, from the following dates:
Chapter III, Sections 1, 2, and 3, with the exception of Article 6(5), shall apply;
Amendment 78 Proposal for a regulation Article 1 – paragraph 1 – point 31 – point a Regulation (EU) 2024/1689 Article 113 – paragraph 3 – point d – point i (new)
(i) 6 months after the adoption of that decision as regards AI systems classified as high-risk pursuant to Article 6(2) and Annex III, and
deleted
Amendment 79 Proposal for a regulation Article 1 – paragraph 1 – point 31 – point a Regulation (EU) 2024/1689 Article 113 – paragraph 3 – point d – point ii (new)
(ii) 12 months after the adoption of the decision as regards AI systems classified as high-risk pursuant to Article 6(1) and Annex I.
deleted
Amendment 80 Proposal for a regulation Article 1 – paragraph 1 – point 31 – point a Regulation (EU) 2024/1689 Article 113 – paragraph 3 – point d – subparagraph 1 – introductory part
In the absence of the adoption of the decision within the meaning of subparagraph 1, or where the dates below are earlier than those that follow the adoption of that decision, Chapter III, Sections 1, 2, and 3, shall apply:
deleted
Amendment 81 Proposal for a regulation Article 1 – paragraph 1 – point 31 a (new) Regulation (EU) 2024/1689 Annex I – Section A
(31a) in Annex I, Section A is deleted;
Amendment 82 Proposal for a regulation Article 1 – paragraph 1 – point 31 b (new) Regulation (EU) 2024/1689 Annex I – Section B – point 20 a (new)
(31b) In Annex I, Section B, the following points are added:
‘20a.Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24);
20b. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);
20c. Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
20d. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);
20e. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
20f. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);
20g. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164);
20h. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);
20i. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);
20j. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);
20k. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
20l. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176);
20m. Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC (OJ L 165, 29.6.2023, p. 1).’
Amendment 83 Proposal for a regulation Article 1 – paragraph 1 – point 32 Regulation (EU) 2024/1689 Annex VIII – section B
(32) in Annex VIII, section B is deleted;
(32) in Annex VIII, section B, points 7 and 9 are deleted;
The matter was referred back for interinstitutional negotiations to the committees responsible, pursuant to Rule 60(4), fourth subparagraph (A10-0073/2026).