Generative AI and watermarking

Briefing 13-12-2023

Generative artificial intelligence (AI) has the potential to transform industries and society by boosting innovation, empowering individuals and increasing productivity. One drawback of its adoption, however, is that it is becoming increasingly difficult to differentiate human-generated content from synthetic content produced by AI, potentially enabling illegal and harmful conduct. Policymakers around the globe are therefore considering how to design and implement watermarking techniques to ensure a trustworthy AI environment.

China has already taken steps to ban AI-generated images without watermarks. The US administration has been tasked with developing effective labelling and content provenance mechanisms so that end users can determine when content is AI-generated and when it is not. The G7 has asked companies to develop and deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. The EU's new AI act, provisionally agreed in December 2023, places a number of obligations on providers and users of AI systems to enable the detection and tracing of AI-generated content; implementing these obligations will likely require the use of watermarking techniques.

Current state-of-the-art AI watermarking techniques nevertheless have significant limitations in terms of technical implementation, accuracy and robustness. Generative AI developers and policymakers now face a number of open issues, including how to ensure the development of robust watermarking tools and how to foster standardisation and implementation rules for watermarking.
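The briefing refers to watermarking of AI-generated text without detailing a mechanism. As an illustration only, the sketch below implements a toy statistical text watermark in the spirit of published "green list" schemes: a keyed hash splits the vocabulary in two for each context, a watermarking generator prefers "green" words, and a detector flags text whose green fraction sits far above the roughly 50 % expected of unwatermarked text. All names (`is_green`, `green_fraction`, `watermarked_continue`) and the detection threshold are illustrative assumptions, not part of any standard or regulation discussed here.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Deterministically assign roughly half the vocabulary to a
    # "green list" keyed on the previous word. A real scheme would
    # use a secret key and the model's token context; SHA-256 of the
    # word pair is a toy stand-in.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(words: list[str]) -> float:
    # Fraction of words that fall in the green list for their context.
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

def watermarked_continue(prev_word: str, candidates: list[str]) -> str:
    # A watermarking generator: among otherwise acceptable candidate
    # words, prefer one from the green list (fall back if none is green).
    greens = [w for w in candidates if is_green(prev_word, w)]
    return greens[0] if greens else candidates[0]

def looks_watermarked(words: list[str], threshold: float = 0.75) -> bool:
    # Unwatermarked text should hover near 0.5; text produced by a
    # green-preferring generator pushes the fraction well above that.
    return green_fraction(words) >= threshold
```

This toy detector also illustrates the robustness caveat raised above: paraphrasing or word substitution shifts words out of their green lists, eroding the statistical signal the detector relies on.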