President. – The next item is the report by Petar Vitanov, on behalf of the Committee on Civil Liberties, Justice and Home Affairs, on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (2020/2016(INI)) (A9-0232/2021).
Petar Vitanov, rapporteur. – Mr President, the EU regulatory framework needs to catch up with technological developments. The use of AI has been growing exponentially, and this raises the question of what we, as co-legislators, are doing to safeguard the fundamental rights of European citizens.
AI is not a product in itself; it’s a method, it’s a tool, and, as such, it needs to serve the overarching goal of improving the well-being of our citizens. The technology holds great promise if it’s developed and used in an ethical and trustworthy manner, but at the same time it carries considerable risks for fundamental rights, democracy and the rule of law.
As co-legislators, we bear enormous responsibility towards European citizens. We need to draw clear red lines for AI-based systems that violate fundamental rights. If we are serious about safeguarding people’s safety and well-being, the future legislation must make it possible to ban or prohibit applications of AI that are incompatible with fundamental rights. Technical progress should never come at the expense of people’s fundamental rights.
It’s not a question of whether AI systems have the potential to result in racially biased and discriminatory outcomes. We actually know for sure that this is the case. We see confirmation of this in the data provided by multiple NGOs. We saw it during the Committee on Civil Liberties, Justice and Home Affairs (LIBE) mission to Washington last year, and, just a couple of weeks ago, we heard it from the UN High Commissioner for Human Rights. And no, AI is not dangerous only when used by autocratic governments. Where the technology is flawed, it is flawed no matter who uses it and for what purposes. Good intentions do not justify the means.
There have been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools, or even being arrested because of flawed facial recognition, and somehow I’m not surprised that the victims are always the poor, the immigrants, the coloured or the Eastern Europeans. The American Civil Liberties Union demonstrated to the US Congress in May 2019 that the error rate with facial recognition of coloured people is higher, basically leading to de facto discrimination. They described facial recognition technology as unregulated, dangerous, racially biased and often untested.
Using facial recognition in public areas may interfere with a person’s freedom of opinion and expression, simply because the protection of group anonymity no longer exists if everyone in the group could potentially be recognised. This could lead to those individuals changing their behaviour, for example by no longer participating in peaceful strikes or demonstrations.
Predictive, profiling and risk-assessment AI and automated decision-making systems target individuals and profile them as criminals, resulting in serious criminal justice and civil outcomes and punishments before they have carried out the alleged action for which they are being profiled. I always thought that this could only happen in the movies! In essence, the very purpose of these systems is to undermine the fundamental right to be presumed innocent.
Colleagues, I really hope that we can have a serious debate and I’m looking forward to it, but I’m pretty confident that we will place fundamental rights before technological progress, and even before security, because there cannot be any security without freedom.
Ylva Johansson, Member of the Commission. – Mr President, I would like to thank you for this report, and a special thanks to Mr Petar Vitanov. I hear, I know, that you are concerned about fundamental rights, and so am I. This summer, gangsters shot and killed Dutch journalist Peter R. de Vries in cold blood.
An attack on a human being. An attack also on our society and our values, on our fundamental rights: the right to life, freedom of expression, freedom of the media. Police caught the suspects within the hour with only a fragment of the getaway car’s registration number. Using state-of-the-art camera systems, the police forced the car to a standstill on the motorway.
Smart digital technology used in defence of citizens’ and our fundamental rights. Without this technology, these criminals would quite simply have got away. To protect both our people and their rights, digital technology is no longer a ‘nice to have’ but a ‘need to have’ for law enforcement.
First, because of the massive amounts of data. In one German state last year, the police seized 3 000 terabytes of data in child sexual abuse investigations alone. They estimated it would take one police officer more than 2 000 years to review, and that’s assuming an officer working eight hours a day looking at one picture every second. A computer processes these images at least 10 to 20 times faster, 24 hours a day; it never gets tired, makes fewer mistakes, doesn’t get sick from what it sees and does not need therapy.
And time is of the essence. In the EncroChat, Sky ECC and Trojan Shield cases, the police captured hundreds of millions of messages with criminals plotting drug deals, violent crimes and even murder. Delays can cost lives.
Second, criminals increasingly use artificial intelligence to commit deception and fraud, cyber attacks and ransomware attacks. We can’t ask the police to bring a knife to a gunfight. We must equip the police with modern tools to fight modern crimes.
Third, we need up-to-date information exchange to fight cross-border crime, and I will address this in the upcoming proposals on a police cooperation code and the update of the Prüm framework.
I know you are concerned about the rights to privacy and data protection, and I must stress that we must protect security and respect fundamental rights at the same time; that is essential for the trust of our citizens. We need to demystify technology and explain the strong safeguards that already exist. A balanced approach and strong safeguards should govern law enforcement’s use of technology, anchored in national laws, guarded by data protection authorities, subject to redress mechanisms and parliamentary oversight.
There is oversight at European level by, for example, the European Data Protection Supervisor and the Joint Parliamentary Scrutiny Group on Europol. The Court of Justice of the European Union and the European Court of Human Rights have built up a body of case law that is relevant for the use of technology by law enforcement. Procedural rights are already guaranteed: the right to an effective remedy, to a fair trial, the rights of defence and the presumption of innocence.
I also agree that AI applications must fulfil robust legal and technical requirements, in particular when they are used by public authorities. Legal accountability for any harmful effects of such systems must be clearly assigned. And this is why these issues are addressed in the Commission’s proposed AI regulation. We need a European approach to ensure safety and full respect for fundamental rights when artificial intelligence is used.
The proposal recognises AI as a strategic tool for law enforcement, to fight terrorism and organised crime. The regulation will facilitate the use of artificial intelligence in a transparent, reliable and secure way, also for law enforcement authorities, by providing clear rules. And I completely agree with you: there is no room for mass surveillance in our society. Our proposal bans mass social scoring and prohibits live biometric identification in public spaces, with a few very well-defined exceptions.
But the police must be able to use AI and digital technology in high-risk cases with a potential adverse impact on fundamental rights. AI must live up to the highest standards. It must be robust, secure and accurate. The quality of the data must be exceptional. Its use must not lead to discriminatory or racist outcomes, and it must be subject to human oversight. When artificial intelligence affects people, people must have the final say.
Let me end by saying again that we must protect both security and fundamental rights, and I am convinced that we can, and this is what our citizens want. I hope that you are ready to work together with me to uphold our values and keep our citizens safe.
Marcel Kolaja, rapporteur for the opinion of the Committee on the Internal Market and Consumer Protection. – Mr President, dear Madam Commissioner, this report calls for a ban on facial recognition systems in public space. That’s an important step in fighting against mass surveillance. Unfortunately, amendments have been tabled by a group of Members with the aim of torpedoing the ban and asking for legal means to spy on citizens. I ask you to vote against these amendments.
Just last night, thanks to the work of 600 journalists worldwide, we learned about tax avoidance and money laundering committed by billionaires and high-profile politicians. For instance, we learned that Czech Prime Minister Andrej Babiš used offshore companies to buy a castle in France. With mass surveillance, journalists cannot possibly do their work safely. Two journalists were murdered in the Union just this year. With facial recognition in public space, oligarchs would have even more tools in their hands to persecute and oppress journalists. I am speaking about oligarchs who systematically work on breaching the rule of law and dismantling democracy. We Central and Eastern Europeans used to live under the eye of Big Brother, and we do not want that to return.
Angel Dzhambazki, rapporteur for the opinion of the Committee on Legal Affairs. – Mr President, colleagues, I too join in thanking the rapporteur, Mr Vitanov, for his work on this subject. Congratulations, colleague!
I agree with you that the positions set out in his report on the need to protect fundamental rights when artificial intelligence is used in the field of criminal law and by the police and judicial authorities in criminal matters are useful. As I have already stressed in the opinion of the Committee on Legal Affairs, artificial intelligence and related technologies could bring many benefits in reducing crime levels, combating human trafficking and the sexual exploitation of children, data analysis, and so on.
But even where we ought to praise them, colleagues, we come to paragraph 9. It stresses that many of the algorithm-based identification technologies currently in use make disproportionately many errors in identification and classification, and thereby cause harm to people disadvantaged as a result of racism, to persons belonging to certain ethnic communities, to LGBTI people, and so on and so forth. Colleagues, does any of you really suppose that artificial intelligence is a racist, a misogynist, a person who hates LGBTI people?
I have very often heard in this Chamber that certain people outside speak against this Union and try to destroy it. No, colleagues, there is no need for them! Paragraphs and claims like these do far more damage than all the enemies of this Union. Think about that!
Tom Vandenkendelaere, on behalf of the PPE Group. – Mr President, AI will shape our future. It will change how we work and live, whether it is in health care, agriculture or, yes, law enforcement. The question isn’t whether we like it or not, the question is how Europe will deal with this change. And one thing is clear: AI is here to stay.
Already today, criminals are shifting their operations. Whether it is in organised crime, terrorism, child porn, money laundering or human trafficking, it happens online. For me, law enforcement authorities must be able to use the full potential of AI to fight criminals. It will allow them to fight criminality faster, more efficiently and in a more targeted way. And yes, that includes facial recognition in public spaces – on the condition that all fundamental rights are guaranteed and that there is no room for bias.
And, colleagues, don’t get me wrong. This does not mean that we want to give police forces carte blanche to do whatever they want. It’s our duty as policymakers to set up a strong legal framework within which they can safely use AI while guaranteeing the safety of our citizens. It’s too easy to argue for moratoria or bans without taking into account the challenges our police officers deal with on the ground. If we are really serious about putting people at the core of trustworthy AI, as we said we would, then it is also about their safety and the benefits AI can bring to better protect ordinary citizens and police officers alike.
How do we do that? It’s simple. Let’s not get trapped in focusing on certain AI applications and tools, but let us assess each use in its specific context against a set of principles and values, and that is what we should be discussing: proportionality, necessity, limiting the use in time and place, transparent and strong democratic oversight, and prior legal authorisation where necessary. That’s why I think this report falls short of the expectations people have and why my Group presented amendments to it.
Digitalisation of our society is inevitable. We cannot be blind to this new reality. It is our duty, all together here in this House, to find the right balance between the use of new technologies on the one hand and the protection of our fundamental rights on the other hand. We have to remain vigilant, but we should not throw the baby out with the bathwater.
Brando Benifei, on behalf of the S&D Group. – Mr President, ladies and gentlemen, as Parliament prepares to examine the proposal for a regulation on artificial intelligence, with this report we are sending a clear message and already setting down an important marker: in our view, there is no place in Europe for mass biometric surveillance, and security and the fight against crime cannot come at the expense of citizens’ fundamental rights.
Identification by means of biometric data in publicly accessible places risks leading to serious abuses of the right to privacy and of other principles underpinning our democratic systems. The European Data Protection Supervisor has said as much: such systems would have a direct negative impact on the exercise of freedom of expression, assembly and association, and even on freedom of movement itself.
Consider, too, what could happen in places less attentive to the separation of powers or to fundamental freedoms, be they states or cities. The risk of abuse is too great. That is why we believe that the exceptions to Article 5 on prohibited practices in the proposed regulation must be removed.
Likewise, predictive techniques used for law enforcement carry a very serious risk of discrimination, on top of the lack of evidence of their accuracy, undermining one of the fundamental pillars of our democratic legal orders: the presumption of innocence.
No human oversight, and no error-free data set, will be enough to ensure that decisions of this kind taken by artificial intelligence systems respect the constitutional guarantees and fundamental rights of the Union, even where such decision-making processes are reversible.
All the more reason why such systems cannot be subject to a mere self-assessment of conformity before being placed on the market, as proposed in the first draft of the regulation before us. Self-assessment exposes us to unacceptable risks of errors and violations, which would be discovered only later by the supervisory authorities, if they have the means to do so, and only once the damage to people’s lives, possibly irreparable, had already been done.
In the Union we already have the world’s most advanced laws on the protection of personal data. For us this is a model, a model we want to bring to the rest of the world, and we cannot afford to retreat even a millimetre from this approach when we come to regulate artificial intelligence. In this field too, we must protect citizens’ rights to the full. I believe this is how we can work towards a Europe with its own human rights-centred model of artificial intelligence.
IN THE CHAIR: MARCEL KOLAJA, Vice-President
Dragoş Tudorache, on behalf of the Renew Group. – Mr President, dear Commissioner, dear colleagues, the use of artificial intelligence in law enforcement is a political decision, not a technical one. Our duty is to apply our political worldview to determine which uses of artificial intelligence are allowed and under which conditions. Europe is built on a set of values. They constrain the realm of the possible, dictating what we cannot do. And our values also guide our way into the future, dictating what we can and what we should do.
What we cannot do is allow the use of technology to lead to a breach of our values. We must only allow AI technologies to be used with strict safeguards and oversight, and we must ensure that human rights are protected throughout.
What we also cannot do is to allow authorities to use technology for mass surveillance, mass social scoring or any type of government control over citizens. We must be doubly cautious in protecting our values when dealing with law enforcement, as law enforcement is the prerogative of the state.
On the other hand, what we can – and should – do is to seek to use AI to reduce the biases and discriminations plaguing our society, including in law enforcement. Technology is a tool. We should invest in it until it is good enough to serve our values. What we also can and should do is ensure law enforcement is competitive and has the best tools at its disposal to fight crime. Fighting crime is also a way to protect our values and should be a top priority for us.
We must therefore strengthen the democratic fibre and resilience of our institutions. And tomorrow’s challenges will not come from the tools themselves but from our ability or inability to use them in accordance with our values.
Kim Van Sparrentak, on behalf of the Verts/ALE Group. – Mr President, without knowing it, we are all being tracked, followed and identified on the streets by facial recognition cameras. This is dangerous, intrusive and disproportionate. Imagine waking up one day with the police barging into your house after AI has flagged you as a suspect. Then it’s up to you to prove your innocence. It is you versus the computer. And the myth that a calculation is more ethical than a human is dangerous, especially where decisions impact people’s lives.
So to my colleagues from the EPP: let’s be realistic. AI is not a quick solution to fight crime or terrorism. An AI camera will not detect radicalisation, and automating police work is not a substitute for police funding and community workers. Looking at the US, in New York City and Boston, replacing AI-driven predictive policing with community policing lowered crime rates. And San Francisco and Boston have already banned biometric surveillance in public spaces.
So not only is a ban perfectly feasible, we in the EU are far behind in our ethical AI choices. And if we as Parliament are serious about making the EU a leader in ethical AI and fundamental human rights, let’s ban biometric surveillance in public spaces.
Jean-Lin Lacapelle, on behalf of the ID Group. – Mr President, dear colleagues, artificial intelligence is an admirable tool and holds formidable potential for our peoples and our nations. But, as usual, the European Union is squandering it in the worst possible way, by turning it into an instrument of ideological struggle.
Thus, in police and judicial matters, where computing could allow decisive progress for the security of our fellow citizens, and in particular of our children, you are restricting its use. You claim that artificial intelligence would reproduce and even amplify discrimination, which would supposedly require prohibiting it from drawing certain conclusions and being as blind as you are in the fight against crime.
You reject an intelligent lie-detection system at the European Union’s borders, even though 80% of so-called unaccompanied minors are in fact adults and 70% of asylum applications are rejected as unfounded.
You claim that the American George Floyd affair, which does not even concern us, is proof of alleged racism in the police forces, and you demand national plans to fight the police officer rather than the thugs.
We expected this report to speak of artificial intelligence, of criminal-justice effectiveness and of the security of our fellow citizens, and all we get is laxity towards criminals and ideological lectures for the forces of law and order and honest citizens. Since Europe does not want to control its borders seriously and fight crime, the Member States will have to take their destiny back into their own hands and decide, by election or referendum, on the vital questions of security and sovereignty: that is exactly what Marine Le Pen will propose in France in March 2022.
Eugen Jurzyca, on behalf of the ECR Group. – Mr President, I understand the fear of leaving control over decisions that can seriously affect human lives to algorithms, for example because algorithms can make mistakes. But I do not agree that, on the basis of such fears alone, we should impose a blanket ban on the use of artificial intelligence wherever it may have legal effects on an individual, as this report proposes. We should have analyses that honestly compare the functioning and effectiveness of human and algorithmic decision-making, and then choose the better option. Even today there are examples of the successful use of artificial intelligence in criminal matters that have led to a more efficient and fairer system, for example the pretrial detention reform in the state of New Jersey.
Cornelia Ernst, on behalf of The Left Group. – Mr President, let me say it very clearly: for our group, biometric facial recognition in public spaces is unacceptable. Facial recognition should in principle be used only to a limited extent, in strictly regulated cases. Upholding fundamental rights is the measure of any state governed by the rule of law. We do not want the matching of biometric facial features to creep in and become a standard procedure of police work. The mere fact that something is technically feasible, or that it makes police work easier, does not justify the automatic deployment of a technology that is so invasive of fundamental rights.
And that is why we will also vote against the amendments tabled by the EPP Group. We have a strong report on the table, one that clearly names the potential for discrimination through AI, because algorithms can reinforce social and racist thought patterns. The EPP Group’s amendments undermine that. With biometric facial recognition, the danger is not only that citizens automatically become objects of suspicion. There is also ample evidence of errors, of innocent people being landed in difficulties. Biometric facial recognition, once accepted as standard, is mass surveillance and a serious intrusion into privacy. And that is exactly why every form of indiscriminate surveillance must be banned.
Mislav Kolakušić (NI). – Mr President, dear colleagues, dear citizens, in recent years we have witnessed how some good ideas, and some not-so-good ideas, have turned into complete disasters. The idea of protecting personal data has brought us to the point where today almost every citizen of the European Union has to give almost all of their data, data they never had to hand over before, to every bank and every company they deal with.
COVID passes, introduced to make travel easier, today prevent those who are perfectly healthy but do not hold one from using health services or shopping at petrol stations. Soon they will not even be allowed to buy food.
Now there is this idea of the biometric tracking of citizens. Although at first it is limited to criminal proceedings, I am convinced it will come to mean the tracking of every citizen at every moment, and that is unacceptable.
Jeroen Lenaers (PPE). – Mr President, Commissioner, dear colleagues, new technologies often bring enormous opportunities and benefits. But at the same time, we often see that they also provide new avenues for organised crime. It was true for the internet, and it is certainly also true for artificial intelligence and machine learning. At the same time, these technologies have huge potential to help the 1.5 million police officers in the EU to fight crime effectively.
They can help in identifying criminals on the run. They can help forecast criminal activity, and they can help us find counterfeit goods and currencies. And we need to look at that potential with an open mind and avoid a situation where criminals profit from AI but law enforcement cannot use it to fight them.
Yes, there are risks involved, and good safeguards absolutely need to be in place. AI needs to be transparent and trustworthy, and we need to make sure that using AI in the field of law enforcement will never compromise our values.
But let’s also not be naive. Let’s not make the mistake of focusing only on the risks and completely ignoring the potential. Several colleagues have said it already: AI is here to stay, and its use will only grow in the coming years. And we only have to look at some countries outside the European Union to see what we should not be doing. We need a balanced approach. We need a European approach, because innovation is in our European DNA, as is our ability to create artificial intelligence in a trustworthy, human-centred and value-based way. Let that be our European trademark in the world, also for law enforcement applications.
Ibán García Del Blanco (S&D). – Mr President, first of all I would like to thank and congratulate the rapporteur on his report, which I believe is absolutely balanced and in line with what this Parliament has already set out on several occasions. We do indeed need to strike a balance between the risk to the protection of rights and, of course, the technological development that helps us achieve objectives of a social nature.
But this is an issue that directly affects fundamental rights. Several of them are at stake in the legal development of this technology, or in the legal framework that will govern it, and at the same time social peace is also at stake when we talk about matters affecting security, the established order itself and the rules by which we live.
The point is to ensure that some of the realities science fiction anticipated in the past, some of which are already a reality today in terms of technological development, do not turn into a kind of dystopia in our own times: the scenarios of films such as Minority Report, in which the police anticipate even the commission of a crime, or the intention expressed, because technologies like these enable them to foresee the possible commission of an offence; or that one day we end up with those robot judges which would, in a way, have delighted certain French revolutionaries when they spoke of ‘the mouth that pronounces the law’, seeking a supposed impartiality in the expression of the people’s justice.
At the same time, however, we must also avoid creating obstacles to the development of tools that can actually help us achieve some of the objectives that can make our societies better: certainly in the administration of justice itself, but also as a support tool for our own security forces.
We cannot ignore the possibilities that artificial intelligence offers, nor hinder its development. That is why I also want to support this report, which I believe fits perfectly, and this is something I would remind some of my colleagues of, with the approach this very Parliament adopted almost a year ago, in a report for which I was in fact the rapporteur, on ethics applied to artificial intelligence technologies. I believe it went in the same direction: trying to avoid undesirable scenarios while at the same time trying not to stand in the way.
Ladies and gentlemen, this report cannot both throw the barriers wide open, letting everyone go their own way, and at the same time be as intrusive as other colleagues claim. I believe it is precisely a report that strikes the middle ground and raises very interesting questions.
Svenja Hahn (Renew). – Mr President, innovation through artificial intelligence is an almost unbelievable wellspring of progress. In the police and the judiciary, too, it can ease workloads and considerably improve the quality of work. The use of AI in law enforcement, however, often deserves the label ‘high risk for civil rights’.
And some things must be prevented altogether. Mass surveillance through automatic facial recognition software in public spaces is a no-go. That is why it is so important that Parliament’s proposal takes a clear position in favour of civil rights and against facial recognition. Human and civil rights are not negotiable, especially not when new technologies are deployed by state authorities. And it does not surprise me one bit that it is once again the conservative colleagues of the EPP who are pushing for biometric surveillance. With your amendments you are once more, for the umpteenth time, placing surveillance dreams, your surveillance dreams, above the protection of our fundamental rights.
This vote is also a signal, ahead of the planned law on artificial intelligence, of the importance the European Parliament attaches to our civil rights in the digital future. And I will tell you one thing: I will work every day to ensure that the AI Act strengthens our fundamental rights in the EU and does not undermine them.
Sabrina Pignedoli (NI). – Mr President, ladies and gentlemen, when we talk about artificial intelligence in the hands of law enforcement, facial recognition, a sensitive issue on which a balance must be found, is not the only question.
Hackers and criminal groups across Europe penetrate the IT systems of public institutions and private companies far too easily. They have hit, for example, healthcare facilities in Italy, France, Germany and Spain, putting their operation at risk. Artificial intelligence must become a fundamental tool in the hands of law enforcement to combat cybercrime more effectively.
But it is not just about suppressing criminal phenomena. We need to invest far more in prevention, reinforcing the defences of personal data that could otherwise end up on the data black market.
We must build effective barriers against hackers and use artificial intelligence as a kind of infiltrator, enabling law enforcement to block possible attacks before they occur. Prevention is far more effective than cure.
Javier Zarzalejos (PPE). – Mr President, there is no doubt that artificial intelligence is a strategic technology of the 21st century, and that this technology also has a place in the field of criminal justice and law enforcement.
Today there are forms of crime that can only be fought with the effectiveness we desire if we place innovative technological tools at the disposal of the security forces and the courts. Think of money laundering, terrorist financing, the proliferation of terrorist content online, the trafficking of human beings by immigration mafias or for purposes of sexual or labour exploitation, or the proliferation of child sexual abuse content, which demands the identification of victims, of perpetrators and of the places where the abuse was committed, work that tests the psychological resilience of those who have to carry it out.
It is true that the algorithms need to improve and that their risks require surrounding their use with strong safeguards. But I am not in favour of absolute bans; I favour guarantees such as those established by the amendments tabled by my group regarding prior judicial authorisation.
We must offer an environment that facilitates the development of artificial intelligence, including in areas that can be classified as ‘high risk’, with the appropriate guarantees.
And allow me to say, when examples from other cities or other countries are mentioned, that I have the impression that the racist biases in those cases are not exactly in the algorithms.
Karen Melchior (Renew). – Mr President, it’s not all algorithms or artificial intelligence that are problematic, but predictive profiling and risk-assessment artificial intelligence and automated decision-making systems are weapons of mass destruction. They are as dangerous for our democracy as nuclear bombs are for living creatures and life. They will destroy the fundamental right of each citizen to be equal before the law and in the eyes of our authorities. It is not only a question of getting the technology good enough. We must not allow mass surveillance to strip us of our most fundamental rights as citizens, for example the right to unite in demonstrations in public spaces.
Madam Commissioner, thank you for underlining the need for modern tools for our judicial authorities. But where is the legal framework that will ensure strict safeguards against misuse and strict democratic control and oversight?
Miroslav Radačovský (NI). – Mr President, Mr Vitanov’s report on the use of artificial intelligence in criminal proceedings is good, balanced and professionally drafted. In my view, its strength lies in the fact that it points out the positives of artificial intelligence in criminal proceedings as well as the negatives. As a judge of many years’ standing, I believe we must be careful about the use of artificial intelligence at the moment of decision in court. There the human element must surely prevail, because deciding on guilt and punishment is always individual, and no algorithm can be applied to the situation at hand. After all, even here in this Chamber the President must from time to time take decisions, and if that were not so, artificial intelligence would be sitting here instead of the President, and artificial intelligence would be sitting instead of the Members, because on the basis of algorithms we would know how to decide. We must guard against the possibility of artificial intelligence being misused where the protection of human rights and freedoms is concerned, but in principle and in essence artificial intelligence is an asset in the fight against crime, and it should be supported and developed.
President. – Thank you. I hope no one wants to replace me, or indeed the Members, with artificial intelligence.
Tomislav Sokol (PPE). – Mr President, Commissioner, colleagues, earlier diagnosis of malignant disease, better traffic management or more rational use of energy are just some examples of the obvious benefits that the application of artificial intelligence brings. However, if it is not properly regulated by law, artificial intelligence can endanger the privacy of individuals and lead to various forms of discrimination. It is therefore no surprise that as many as 88% of citizens believe it needs to be managed carefully.
The fight against sophisticated, well-financed and well-equipped terrorist and criminal groups in the 21st century cannot be imagined without the use of artificial intelligence. On the other hand, the use of artificial intelligence is particularly sensitive in the field of criminal law, because it includes the possibility of geometric facial recognition and algorithmic decision-making.
In order to protect citizens from its misuse, but also to become a world leader in smart technologies, the European Union needs a new, comprehensive approach to artificial intelligence. Such an approach should ban the concept of social scoring, whereby algorithms can collect a wide range of data on citizens and their behaviour.
On the other hand, the use of biometric identification systems, that is, facial recognition in public places for law enforcement purposes, should, in line with the principle of proportionality, be limited to situations such as the search for a missing child, the prevention of a specific and imminent terrorist threat, and the detection, location, identification or investigation of the perpetrator of, or a suspect in, a serious criminal offence.
We must strike a balance between using artificial intelligence to catch criminals on the one hand and protecting human rights on the other, but without spreading the hysteria and paranoia that, unfortunately, we are witnessing here today.
Fabienne Keller (Renew). – Mr President, Commissioner, dear colleagues, the police and judicial fields are not exempt from technological change, and among these technologies, artificial intelligence is a new and powerful one.
The use of this tool has proved to be a real asset in certain criminal investigations, in the fight against terrorism and in border control. Thus, in the terrifying case of the Paris attacks of 13 November 2015, it was partly thanks to this technique of artificial intelligence and facial recognition that investigators were able to identify, locate and arrest the suspected terrorists.
However, its use must of course take place within a framework of strict control. Its use must be limited and proportionate, and always accompanied by human supervision. There must be genuine transparency about the technologies used, as well as democratic and, above all, judicial control over their use, so as to avoid any bias and ensure respect for fundamental rights.
Dear colleagues, the use of artificial intelligence in criminal matters can be an asset for criminal investigations and for European justice. Let us not deprive ourselves of it, while respecting fundamental freedoms.
Maite Pagazaurtundúa (Renew). – Mr President, thank you, Commissioner Johansson. It was necessary for Parliament to consider the application of artificial intelligence in criminal law and in criminal matters, sensitive though the subject may be. We need to seek the balance between security and freedom when social or technological circumstances change, and the fact is that they have changed.
There are no magic solutions to complex problems, but the police and judges must be able to use technologies that prevent part of the impunity enjoyed by the most sophisticated cybercrime, or by terrorism when it pursues the gravest of aims and commands enormous resources. Impunity would also amount to a degradation of the right to justice held by society and, very specifically, by the victims.
That said, the technology is such, so powerful, that certain measures, such as facial recognition, should be enabled only under strict judicial control. It is essential that people’s privacy, in a general sense, and indeed certain other guarantees, be preserved. I personally have doubts about the moratorium in paragraph 27, but I have no doubt that we must keep working on the basis of this report, which will set out our political position, and that the objective must be to have very finely tuned rules, backed by broad, strong, large-majority consensus in this House.
Ylva Johansson, Member of the Commission. – Mr President, dear honourable Members, I would like to thank you very much for this debate. Many of you have raised serious concerns on important aspects of artificial intelligence, and I have listened very carefully to your interventions. I will try to answer some of the issues that have been raised in this debate.
First, on mass surveillance. I said it at the beginning, and I completely agree with all of you who have raised this: there is no room for mass surveillance in the EU or in our societies. We already have strong safeguards in place, and the Commission’s AI proposal will add additional ones.
EU data protection rules prohibit, in principle, the processing of biometric data for the purpose of uniquely identifying a natural person, except under very specific conditions. The conditions are clearly laid out in our acquis, the GDPR and the Law Enforcement Directive. This ensures that the use is proportionate, respects the right to data protection and safeguards fundamental rights.
Further, the AI Act includes the prohibition of real-time remote biometric identification in publicly accessible places by law enforcement authorities, with very narrow exceptions and strong safeguards. The AI Act follows a risk-based approach: the higher the risk, the stricter the rules. The majority of AI applications are used for purely administrative purposes, such as translation systems or workflows; they do not pose any concerns and require no regulatory intervention, while other use cases, for example profiling systems used to detect or investigate a crime, or to identify a person remotely, are classified as high-risk.
Many of you have also raised concerns about facial recognition. Facial recognition is of outstanding importance for law enforcement in identifying perpetrators or victims of crime, not just in publicly accessible spaces but also in the online environment. Such systems are in operation, and they save lives.
I have to note that the accuracy of AI technologies is 10 times higher than that of non-AI technologies, and the overall accuracy of such systems has increased significantly in recent years.
In addition, we should not forget that each potential match must be confirmed by experts before action is taken. Let me be clear: artificial intelligence is not allowed to take the decision to break into a person’s home or to arrest a person. Of course not. AI is a necessary tool to help human beings in law enforcement make the right decision in time. Safeguards are necessary: the more intrusive the effects, the stronger the safeguards must be.
I’m very glad that we all agree that we need a European approach and a common European regulation based on our values.
Finally, I would like to say: do not set the protection of fundamental rights against the protection of human beings and of our societies. It is simply not true that we have to choose. We are capable of doing both. To be able to do so, to find the right balance, we need the kind of open, free, democratic debate we are having this evening here in Parliament. Let us continue that debate together to protect our societies, lives and fundamental rights alike.
Petar Vitanov, rapporteur. – Mr President, I am also happy with this debate, because we agree on two things: the benefits of AI in law enforcement, on the one hand, and, on the other, human rights, fundamental rights, which every single colleague here mentioned. There are, of course, divisions. There are definitely two groups. The first one, to which I belong, says that we safeguard fundamental rights by not allowing the use of unreliable applications of AI; the other group tries to convince us that, under certain conditions of use, those same unreliable applications can serve the aim of protecting unconditional human rights. Of course, it’s a political choice.
My choice is simple. I urge you to reject all of the amendments tabled, because they would significantly alter the spirit of this report. But of course, if you prefer the second option, please try to convince that single mother who works 12 hours a day, who lives in a poor neighbourhood because she cannot afford a better one, a fancy one, raising her own children, that her children are potential criminals; or try to convince the poor, the coloured, the immigrants, the foreigners that they are potential criminals only because the AI says so. Is this the world that we want to live in? Is this the world that we want for our children? Will we be able to sleep freely at night? To be honest, I cannot.
President. – The debate is closed. The vote will take place on Tuesday, 5 October 2021.
Written statements (Rule 171)
Laura Ferrara (NI), in writing. – Artificial intelligence is destined to play a strategic role in fighting crime effectively and guaranteeing the security of citizens.
The opportunities and advantages it can offer in criminal justice, in the prevention, investigation, detection and prosecution of criminal offences, are indisputable, provided that the compatibility of the technologies with respect for fundamental rights and with the principles of necessity and proportionality is guaranteed.
Forms of mass surveillance and predictive policing, using sophisticated algorithms that judge social behaviour or personality traits in order to anticipate the conduct of particular individuals, do not shield us from the risk of opaque decision-making processes or, worse, from new-generation Lombrosian theories and dangerous violations of rights.
A common regulatory framework at European level is indispensable to help law enforcement authorities use artificial intelligence services and tools that meet criteria of transparency, impartiality and fairness, avoiding prejudice and discrimination between individuals or social groups, as well as risks of violating privacy and human dignity.
Karol Karski (ECR), in writing. – For many years the pace of digitalisation and technological development, especially of tools using artificial intelligence, has outstripped the pace of legislative work in this area. However, given the ever-advancing digitalisation in the field of internal security, we cannot afford to halt the development of new technologies. The application of new solutions must be adequate and proportionate, guaranteeing, on the one hand, a high level of protection of fundamental rights and, on the other, the conditions for law enforcement authorities to use analytical and statistical solutions that effectively support the goal of ensuring the security of citizens.
An indispensable element of any tool using artificial intelligence for tasks in the fields of public order, border management, migration and asylum is human oversight, as a factor that strengthens trust where technology directly affects citizens’ rights. In the context of the report under consideration, the specific nature of law enforcement authorities should be taken into account and their needs met as far as possible, without reducing the protection of fundamental rights, the maintenance of which is a priority. It would be justified to separate the use of artificial intelligence tools in systems supporting internal security tasks from the proposed regulation on the use of artificial intelligence, and to regulate it in a separate legal act. The further development of effective analytical and statistical tools is essential in the face of new challenges and threats, but only while maintaining a high level of protection of fundamental rights.