Procedure : 2018/2752(RSP)
Document stages in plenary
Document selected :

Texts tabled :

RC-B8-0308/2018

Debates :

PV 11/09/2018 - 15
CRE 11/09/2018 - 15

Votes :

PV 12/09/2018 - 6.8
Explanations of votes

Texts adopted :

P8_TA(2018)0341

Verbatim report of proceedings
Tuesday, 11 September 2018 - Strasbourg Revised edition

15. Autonomous weapons systems (debate)

  The President. – The next item is the debate on the statement by the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy on autonomous weapons systems (2018/2752(RSP)).

 
  

  Federica Mogherini, Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy. – Mr President, thank you for putting artificial intelligence on the agenda. I know that this might look like a debate about some distant future or about science fiction: it’s not. Artificial intelligence is already part of our daily life – when we use our smartphone or when we watch a TV series, we understand that very well – and it is now starting to be weaponised and to impact on our collective security. So it makes a lot of sense to have this debate here today.

We are entering a world where drones could fire and could kill with no need for a man to pull the trigger. Artificial intelligence could take decisions on life and death, with no direct control from a human being. The reason why we are here today is not that we are afraid of technology, let me start by saying that. Human ingenuity and technological progress have made our lives easier and more comfortable. The point is that scientists and researchers should and must be free to do their job, knowing that their discoveries will not be used to harm innocent people.

After World War II, a large number of nuclear scientists started to oppose nuclear weapons so that research could focus on the peaceful applications of nuclear energy, and I think that today we are witnessing something very similar: scientists and artificial intelligence pioneers are warning us of the dangers ahead. Some of them are refusing to work for the military.

I believe the best way ahead is to agree on some common principles regarding the military use of artificial intelligence, and to define the boundaries of its applications so that, within those limits, scientists are free to explore the immense positive potential of artificial intelligence. This is a core objective in the Commission’s communication on artificial intelligence and in the follow-up work that will also cover security matters.

At the beginning of this month, the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems agreed on an initial set of possible guiding principles. This is the first step, after a number of failures, towards a shared approach. It is a good starting point, and the new guiding principles are very much in line with the positions we have developed inside the European Union under European External Action Service coordination. Let me say that this is one of the points on the agenda where I could easily sit on both sides of the Chamber, because there is work being done on the Commission side and work being done on the Council side, as well as under the EEAS leadership.

The group of experts stresses that international humanitarian law applies to all weapon systems, both old and new, and that all weapons must always remain under human control. The experts have agreed that the UN Convention on Certain Conventional Weapons is the appropriate framework to regulate weapons of this kind and that any policy measure must not interfere with the civilian uses of artificial intelligence. This is only the first stage of the discussion and there is no agreement yet on any regulation, so work will continue within the Group of Governmental Experts during the course of next year.

I believe that we Europeans have an important contribution to bring to this table. Our Member States, it’s true, hold different views on some issues, but we all agree that the use of force must always abide by international law, including international humanitarian law and human rights law, and this fully applies to autonomous weapons systems.

States and human beings remain responsible and accountable for their behaviour in an armed conflict, even if it involves the use of autonomous weapons, and this is why our position at the UN has been that humans should always make the decisions on the use of lethal force and always exert sufficient control over lethal weapon systems.

Of course, we do not have all the answers or all the solutions, and, partly for that reason, I decided a few months ago to set up a panel with tech leaders from different backgrounds and fields of expertise. We held the first meeting in Brussels in June, with all of them. We have started a conversation between the tech world and the foreign and security policy community, and my intention as High Representative is to put this issue on the table for the defence ministers, too, at one of our next Council meetings: the question of how we can harness the opportunities of the digital era while also addressing the rising threats.

Among the members of this global tech panel are some of the experts on artificial intelligence who have been most vocal on the issue of lethal autonomous weapons. Together with the experts’ community, we can find a solution that is both prudent and innovative. We can continue exploring the immense possibilities of artificial intelligence and, at the same time, guarantee full respect for human rights.

This is a collective responsibility and I’m particularly glad that the European Parliament is leading the way and driving the conversation on this issue, so I am looking forward very much to listening to your views on this extremely important part of our common work.

(Applause)

 
  

  Bogdan Andrzej Zdrojewski, on behalf of the PPE Group. – Mr President, technology is always a difficult subject, especially when we have to bring it within legal regulation. It seems to me that this problem raises three basic propositions that should be stated unambiguously. First, there must be no regulation, or even aspiration, that would ultimately block work on autonomous weapons systems; that would, quite simply, be detrimental to the security of Europe, and above all of Europeans. Second, autonomous weapons systems must remain subject to human control, and Parliament will consistently insist on this. Third, we need a legal framework that leaves no doubt not only about the use of autonomous systems, but also about responsibility for their use. For these conditions to be met, three objectives must be achieved: first, a good definition of autonomous weapons systems – we have that; second, human responsibility; and third... (The President cut off the speaker.)

 
  

  Ana Gomes, on behalf of the S&D Group. – Mr President, we support and are grateful for your position, Ms Mogherini, and for your initiatives, in particular the convening of that panel of experts to start a conversation at a level that is not only technical but also political.

We want even more in this resolution, which is supported by various political groups in this Parliament: we want to move towards a binding common position that clearly prohibits the development of lethal autonomous weapons systems – that is, weapons without human intervention, based on so-called artificial intelligence.

This is all the more pressing because we want a single, clear position at the next meeting of the High Contracting Parties to the Convention on Certain Conventional Weapons, which will take place in November in Geneva, where, obviously, you and your services should speak for the whole Union. And it is indeed very important that this message be clear to all sectors: to industry, to science and technology, and to the civilian, military, security and political establishments.

The fact is that weapons of this kind, without human intervention, are not acceptable and could jeopardise the future of humanity, and this conversation has to reach, in particular, those Member States that have been reluctant to embark on a concerted process at European level, namely the United Kingdom and France, which have, respectively, the TARANIS and nEUROn projects – projects that should obviously be politically controlled at European level by the European External Action Service.

 
  

  Anna Elżbieta Fotyga, on behalf of the ECR Group. – Mr President, at a recent meeting of the SEDE Subcommittee, its recognised independent experts and NATO representatives informed us that no Western democracy possesses lethal weapons that are fully autonomous, lacking even remote control. In calling for a ban, we must be aware that such regulation can also become a tool of international manipulation: the threat we foresee comes, in theory, above all from authoritarian regimes, which by their very nature tend not to submit to the requirements of international law.

 
  

  Norica Nicolai, on behalf of the ALDE Group. – Mr President, Madam Commissioner, we welcome your initiatives in a field that we consider to be a field of the future and particularly important for the future of humanity. Artificial intelligence is an increasingly present and increasingly advanced reality, but without a regulated framework it risks becoming a threat to international security and stability and to individuals.

In the use of these types of lethal autonomous weapons we must respect the principles of international law, the principles of humanitarian law and, indeed, the values we protect at European level. It cannot be acceptable to leave room for these artificial intelligences to decide on a person's right to life or death. In that context, the absence of any moral, ethical or even legal responsibility could generate chaos, abuse and global insecurity.

That is why this resolution asks you to continue the work you have begun until we have a common position in this field – a position that is crucial and, I stress, Madam Vice-President of the European Commission, fully in line with our European law and with the values we protect.

 
  

  Reinhard Bütikofer, on behalf of the Verts/ALE Group. – Mr President, as the EU is being called upon to shoulder more responsibility as an actor in the security field, it is also very important that on all the issues that we are confronted with we clearly make visible to our citizens and everybody else that we will do that according to our own principles, and that is why we need a ban on killer robots.

We believe that the use of these kinds of weapons, which select and attack a target without meaningful human control, would dramatically change the world we live in, and it is quite obvious that the use of such weapons, if they existed, could most probably not be limited just to state actors. Therefore, we want the EU to come up with a common position in November in favour of a legally binding ban on killer robots.

 
  

  Sabine Lösing, on behalf of the GUE/NGL Group. – Mr President, as early as 2014 this Parliament adopted a joint motion for a resolution calling for a ban on the development, production and use of fully autonomous weapons. How, then, could the negotiator of this very House disregard that majority will, so that the negotiations between the Council, Parliament and the Commission produced the result that autonomous weapons systems are to be funded with European taxpayers' money under the EU armaments programme? In the process, even the legal text on the EDIDP adopted by Parliament, which rules out the financing and development of autonomous weapons systems, was disregarded. That is a scandal and must be reversed.

The resolution now put to the vote, which unambiguously demands an international ban on the development and production of lethal autonomous weapons systems, is therefore all the more important. There is no legitimacy whatsoever, moral or under international law, for these weapons – weapons that kill people without any court ruling, in a targeted and insidious way. A weapons machine taking decisions on human life and death? The threshold for violent intervention will keep on falling. A new arms race will be set in motion, and it will be a booming business for the arms industry.

Anyone who defends human rights and humanism can only demand a complete ban on these weapons systems.

 
  

  Fabio Massimo Castaldo, on behalf of the EFDD Group. – Mr President, ladies and gentlemen, thanks to the enormous and rapid progress made in military technology, robotics and artificial intelligence, lethal autonomous weapons are set to dominate the military landscape of the future.

I wonder, however, whether the supposed military and strategic advantages of these new autonomous devices will really outweigh the risks and dangers associated with their use. I am referring in particular to the difficulty of identifying a single, genuine party responsible for crimes that might be committed against innocent people, and to their uncontrolled proliferation, not to mention the risk of error or, worse still, of hacking – to say nothing of the inability to handle every contingency, without human intervention, in full compliance with the rules of engagement and international law.

For these and other reasons, I believe it is necessary to begin negotiations on a legal instrument that prohibits, or almost entirely restricts, the use of these weapons. We have the duty and the responsibility to protect our citizens, avoiding apocalyptic scenarios in which killer robots decide autonomously on whom to unleash their firepower. It cannot be an algorithm or a robot that decides on the life and death of human beings.

 
  

  Michael Gahler (PPE). – Mr President, the development of artificial intelligence is reaching into more and more areas of life. What may still seem useful in the civilian sphere, in private life, is of course frightening when you think it through to its conclusion, towards the killer robots that have been described. The outcome must indeed be to prevent weapons from taking on a life of their own and thereby becoming a danger to humanity.

That is why I agree with the approach the High Representative has proposed. She has, after all, put experts to work on this. Our demand here is also that she work towards the Member States and the Council drawing up and adopting a common position. I believe that is how we can ensure legal certainty as well as a common political course on this challenge, and how, together, we can arrive at a good position – including with this resolution.

 
  

  Arne Lietz (S&D). – Mr President, High Representative, the failure of the Geneva negotiations on autonomous weapons systems at the beginning of September showed once again how important it is for the EU to speak with one voice here. I too call for an internationally binding ban on fully autonomous weapons that decide on life and death by themselves. The existing international outlawing of chemical weapons should be both a warning and a spur to us. We must not let it come to the production, export and use of autonomous weapons that kill without a human decision and can likewise spin out of control. That is also what the SPD in the European Parliament demands in its recently adopted position paper on European security and defence policy.

The development of autonomous weapons systems that is already under way must therefore urgently be stopped. We must prevent technologies from reaching the market to which existing human rights law cannot be applied and which raise moral and ethical problems that so far remain unanswered. I therefore also welcome the fact that the European Parliament has already come out against funding such weapons systems through the European Defence Fund.

 
  

  Eugen Freund (S&D). – Mr President, High Representative, what are we talking about today? Artificial intelligence has advanced so far that there is a danger it could slip out of our hands. In industry that will lead, for instance, to the loss of jobs – which is bad enough – but with autonomous weapons it is literally a matter of life and death. Systems are already on the market that can lock on to, track and destroy incoming missiles without human intervention. But those weapons were at least programmed by humans. Now, however, the work goes on. The next step brings in algorithms in which humans no longer play any role at all. Then the autonomous drone alone decides whether a target it has identified is destroyed or not.

We therefore urgently need rules that prevent these systems from being deployed without control. Beyond that, the European Union must press internationally for moral and ethical responsibility always to remain with humans whenever any weapons are used.

 
  

  Neena Gill (S&D). – Mr President, while fully autonomous lethal weapons are not yet operational, the use of artificial intelligence is already playing an important role for militaries around the world. That is why I welcome the fact that we are debating how we can shape international law and policies in this field, because far too often we are behind the curve.

Given the EU’s ambition to be a global actor for international peace and security, my questions to the High Representative are these: how do the Commission and the EEAS define ‘meaningful human control’ over autonomous weapons systems, and what type and degree of human control is required? How do they propose we strengthen the precedent in international and disarmament law for banning weapons without human control? And, given the intrinsic dual-use nature of emerging technologies, how do we avoid hampering progress in civilian research and development while banning lethal autonomous weapons internationally?

 
  

  Federica Mogherini, Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy. – Mr President, first of all let me thank you, as I said at the beginning, for putting this issue on the agenda, but also for your clear indication of the need to continue this work and to engage in the dialogue we have already started, in particular with industry and researchers, but also with civil society organisations. First and foremost, we need to understand better what we are talking about, because I think we all have more questions than answers in discussing issues that can have very complicated technical implications, which in turn can have very complicated impacts on the political decisions that need to be taken. So I think the core issue here is to keep at the centre of the debate human control and human responsibility for whatever affects the lives and deaths of human beings.

The very definition of lethal autonomous weapons is not fully clear yet, and this is why we should define the key element of human control. Sound technical work here is key to getting the political decisions right, also because we are discussing a new frontier: while we discuss the norms, the regulations and the legal framework for all of this, we need to be sure that we are discussing the right things to come. This discussion is, I believe, crucial, even if there is no agreement at this stage on regulatory measures, which could include a legally binding instrument or a political declaration. I think the right approach here is first of all to focus on the content of what we need to achieve; then, I hope, the forum and the consensus on the instrument will follow.

If the discussion on the forum takes the lead, we might end up with an empty instrument that is not supported by the most relevant actors, and that in itself would be a problem.

So when we talk about non-proliferation and arms control – as I said, this is the new frontier – you can count on my full commitment to making sure that the European Union, together with the Member States, all of them, works on this in a united, meaningful and substantial manner and, obviously, in doing so we will take into full consideration the positions expressed by this Parliament.

Thank you again for having this conversation going among policymakers and the expert community. I’m sure this will not be the last time we discuss this issue and I am fully committed to continuing this work together.

(Applause)

 
  

  The President. – Seven motions for resolutions have been tabled in accordance with Rule 123(2) of the Rules of Procedure.

The debate is closed.

The vote will take place tomorrow, Wednesday, 12 September 2018.

 
Last updated: 6 December 2018