Debates
Monday, 10 February 2020 - Strasbourg Revised edition

Automated decision-making processes: Ensuring consumer protection, and free movement of goods and services (debate)

  Petra De Sutter, author. – Mr President, the Committee on the Internal Market and Consumer Protection (IMCO) formulated a couple of questions regarding automated decision-making processes in anticipation of the European Commission’s initiatives on artificial intelligence (AI), which will be presented in Parliament next week. For our committee, it is of utmost importance that any future initiative on the matter guarantees not only the free movement of AI-enabled goods and services, but also consumer trust in these new goods and services, because the applications are numerous and encompass virtually all sectors of the internal market. We are witnessing rapid development of AI technology. Consumers are confronted on a daily basis with systems that use automated decision making, such as virtual assistants and chatbots on websites, and, as is the case with all technological advancements, this presents us with opportunities as well as challenges.

AI and automated decision making offer great potential in terms of innovative and higher-quality products and services, but at the same time, various challenges need to be addressed in order to realise this potential. First and foremost, services and goods using AI and automated decision making create the risk of consumers being misled or discriminated against, for example in relation to differentiated pricing. Will consumers be aware that the price displayed on their screen has been adapted to their estimated purchasing power? The same goes for professional services. If important decisions are automated and thus carried out without sufficient human oversight by highly skilled professionals, are consumers aware that decisions affecting their professional, financial or personal lives are in fact not made by humans, and if they feel wronged, will they be able to demand a human review? Will a human ultimately be responsible for final decisions, or will they be able to reverse them? So my first question to you is: how is the Commission planning to ensure that consumers are protected from unfair or discriminatory commercial practices or from potential risks entailed by AI-driven professional services?

Secondly, there is a risk that the existing EU product safety and liability frameworks do not adequately cover new AI-enabled products and services. Do concepts such as defective products, and the legal provisions built around them, adequately cover situations in which harm is caused by products operating under automated decision making? As is the case with other products and services, we should ensure that businesses and consumers are protected from harm and receive compensation if it occurs. This is important in order to ensure secure free movement throughout the single market. Therefore, our second question is: what initiatives may we expect from the Commission to ensure that the EU safety and liability frameworks are fit for purpose? In that respect, we should ensure that market surveillance authorities and other competent authorities possess adequate means and powers to act. This is especially important when competent authorities, businesses and consumers do not have access to clear information on how a decision was taken. How will the Commission ensure greater transparency in this respect?

Lastly, on biased or unlawfully obtained data sets, the IMCO Committee stresses the importance of respecting regulations such as the GDPR when it comes to the collection of data, but also of looking at the use of non-personal data, barriers and requirements. This is something we can and should address at the design stage: we should build in these safeguards – privacy by design. So I want to ask: how is the Commission planning to ensure that only high-quality and unbiased data sets are used in automated decision-making processes?

Dear Commissioner, as I said at the beginning, AI offers great potential in many aspects of our daily lives and in many fields of the internal market, yet it confronts us with many new and unknown risks. We should address these risks in an adequate and timely manner, because consumer trust will be crucial for the acceptance of these new technologies in our society and economy. I thank you in advance for your answers to these very topical and societally important questions, and I, together with the whole IMCO Committee, look forward to the presentation of your plans next week.
