Code of Practice on marking and labelling of AI-generated content
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: European Union
An official EU policy initiative relevant to AI governance researchers tracking regulatory approaches to synthetic media transparency and AI disclosure requirements, complementing the EU AI Act's binding provisions.
Metadata
Summary
This European Commission initiative establishes a voluntary code of practice under which platforms and AI providers commit to marking and labelling AI-generated content, including deepfakes and other synthetic media. It aims to improve transparency and help users identify AI-generated text, images, audio, and video online. The code is part of the EU's broader digital strategy and supports compliance with the AI Act and the Digital Services Act.
Key Points
- Establishes voluntary commitments for marking and labelling AI-generated content across text, image, audio, and video modalities.
- Supports EU regulatory frameworks including the AI Act, which mandates transparency for certain AI-generated content.
- Targets platforms, AI developers, and content distributors to adopt consistent disclosure and watermarking practices.
- Addresses disinformation risks by making synthetic content detectable and distinguishable from human-created content.
- Part of the EU's broader digital strategy to ensure trustworthy and accountable AI deployment in public-facing contexts.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Content Authentication | Approach | 58.0 |
Cached Content Preview
Code of Practice on marking and labelling of AI-generated content | Shaping Europe’s digital future
This code of practice aims to support compliance with the AI Act transparency obligations related to marking and labelling of AI-generated content.
Marking and labelling of AI-generated content
The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of generative AI systems) address risks of deception and manipulation, fostering the integrity of the information ecosystem. These transparency obligations pertain to the marking and detection of AI-generated content and the labelling of deep fakes and certain AI-generated publications. They complement other rules, such as those for high-risk AI systems or general-purpose AI models.
To assist with compliance with these transparency obligations, the AI Office has kick-started the process of drawing up a code of practice on transparency of AI-generated content. The code will be drafted by independent experts appointed by the AI Office in an inclusive process, with eligible stakeholders invited to contribute to the drafting. If approved by the Commission, the final code will be a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations under Article 50(2) and (4) of the AI Act.
Scope of the working groups
The drafting of the code is centred on two working groups, following the structure of the transparency obligations for AI-generated content in Article 50.
Working group 1: Providers
Focuses on the obligations requiring providers of generative AI systems to ensure that:
Outputs of AI systems (audio, image, video, text) are marked in a machine-readable format and detectable as artificially generated or manipulated.
The employed technical solutions are effective, interoperable, robust, and reliable as far as technically feasible. These must take into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards.
Working group 2: Deployers
Focuses on the obligations requiring deployers of generative AI systems to disclose:
Content that is artificially generated or manipulated, constituting a deep fake (image, audio, or video which resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful).
AI-generated or AI-manipulated text publications informing the public on matters of public interest, unless the publication has undergone a process of human review and is subject to editorial responsibility.
Both groups will also consider cross-cutting issues, i
... (truncated, 7 KB total)