Artificial intelligence, democracy and elections
Artificial intelligence (AI) has become a powerful tool thanks to technological advances, access to large amounts of data, machine learning and increased computing power. The release of ChatGPT at the end of 2022 marked a new breakthrough in AI. It demonstrated the vast range of possibilities involved in adapting general-purpose AI to a wide array of tasks, and in getting generative AI to produce synthetic content based on prompts entered by the user. In just a few years' time, a very large share of online content may be generated synthetically.

AI is an opportunity to improve the democratic process in our societies. For example, it can help citizens gain a better understanding of politics and engage more easily in democratic debate. Likewise, politicians can get closer to citizens and eventually represent them more effectively. Such an alignment between citizens and politicians could change the face of electoral campaigns and considerably improve the policymaking process, making it more accurate and efficient.

Although concerns over the use of AI in politics have been present since the late 2010s, those related to democracies, and the election process in particular, have grown with the recent evolution of AI. This emerging technology poses multiple risks to democracies, as it is also a powerful tool for disinformation and misinformation, both of which can trigger tensions resulting in election-related conflict and even violence. AI can, for example, generate false information, or spread biases or opinions that do not represent public sentiment. Altogether, despite its benefits, AI has the potential to affect the democratic process in a negative way.

Despite the above risks, AI can prove useful to democracies if proper safeguards are applied. For example, specific tools can be employed to detect the use of AI-generated content, and techniques such as watermarking can be used to clearly indicate that content has been generated by AI.
The EU is currently adapting its legal framework to address the dangers that come with AI and to promote the use of trustworthy, transparent and accountable AI systems.
Briefing