Parliamentary questions
5 April 2018
Question for written answer P-002008-18
to the Commission
Rule 130
Mady Delvaux (S&D), Françoise Grossetête (PPE), Eva Kaili (S&D)

Subject: Need for regulations and standards in the field of artificial intelligence
Answer in writing

Artificial intelligence constitutes a major turning point for many European industries (automotive, finance, defence, etc.) and for civil society as a whole. Its development is driven by image recognition and neural networks (deep learning).

Such networks are, however, ‘black boxes’, for which there is currently no standardised process to ensure the absence of bias or to verify their proper functioning. Such standardisation is required to enable the use of artificial intelligence in critical systems (drones, cars, etc.).

At present, countries such as the US and China are moving fast to put in place their own standards, which could be detrimental to European citizens and companies, who may enjoy a lower level of protection as a result.

Is the Commission aware of this problem?

How does the Commission intend to defend the interests both of companies developing artificial intelligence systems and of their users (public and private)?

What is the Commission’s view on the balance to be struck between the right to experiment and regulation of the use of artificial intelligence?

Original language of question: FR 
Last updated: 11 April 2018