On Monday afternoon, MEPs from the Internal Market (IMCO) and the Civil Liberties (LIBE) committees and guest experts discussed the Artificial Intelligence Act proposal.
First, the participants discussed the scope and purpose of the AI Act, such as the inclusion of general-purpose AI and the respective benefits of broader versus narrower definitions. They also raised the issue of future-proofing the proposed legislation.
Next, MEPs and experts discussed risk assessments, the handling of high-risk AI systems and governance models. The conversation touched upon sandboxes and their impact on conformity assessments, empowering users with the right to information and redress mechanisms, possible risk-mitigating obligations for AI deployers, and protections for fundamental rights.
After the hearing, co-rapporteur Brando Benifei (S&D, IT) said: “Today’s hearing was extremely interesting and provided valuable insights from experts on some key areas of the Regulation that still need reflection, such as the distribution of responsibilities across the AI value chain, the governance structure and its possible models, and the criteria defining high-risk AI. Since this is the first world attempt to regulate AI, such reflections are still needed and welcome, and will feed into the work we are currently conducting, together with my co-rapporteur Dragos Tudorache. Our aim is to ensure a clear, human-centric, future-proof legislation on AI.”
Co-rapporteur Dragos Tudorache (Renew, RO) said: “One of my main priorities as co-rapporteur on the AI Act is to provide legal clarity and sharpness of language to the text. This will support the twin objectives of safeguarding our values and fundamental rights and of stimulating innovation and economic growth, because it will help the business environment comply with this Regulation. In this light, I believe that the scope of the AI Act needs to be horizontal, with as few exclusions by default as possible, but we need to work to provide more legal clarity on what is prohibited, what is high-risk, and what AI needs more transparency: the core of the AI Act. Along the same lines, we also need a sharper definition of what we mean by ‘high-risk AI’, and, together with it, we need a stronger governance framework to assess, quantify, and qualify risk and to enforce the rules.”