Tackling deepfakes in European policy

30-07-2021

The emergence of a new generation of digitally manipulated media – also known as deepfakes – has generated substantial concern about possible misuse. In response, this report assesses the technical, societal and regulatory aspects of deepfakes. The rapid development and spread of deepfakes is taking place within the wider context of a changing media system. An assessment of the risks associated with deepfakes shows that they can be psychological, financial and societal in nature, and that their impacts can range from the individual to the societal level. The report identifies five dimensions of the deepfake lifecycle that policy-makers could take into account to prevent and address the adverse impacts of deepfakes. It presents policy options under each of the five dimensions, which could be incorporated into the AI legislative framework, the digital services act package and beyond. A combination of measures will likely be necessary to limit the risks of deepfakes, while harnessing their potential.
External author

This study has been written by Mariëtte van Huijstee, Pieter van Boheemen and Djurre Das (Rathenau Institute, The Netherlands), Linda Nierling and Jutta Jahnel (Institute for Technology Assessment and Systems Analysis, Karlsruhe Institute of Technology, Germany), Murat Karaboga (Fraunhofer Institute for Systems and Innovation Research, Germany) and Martin Fatun (Technology Centre of the Academy of Sciences of the Czech Republic - TC ASCR), with the assistance of Linda Kool (Rathenau Institute) and Joost Gerritsen (Legal Beetle), at the request of the Panel for the Future of Science and Technology (STOA) and managed by the Scientific Foresight Unit, within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.