Google collaborated on C2PA version 2.1
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Google AI
Relevant to AI governance and deployment safety discussions around deepfakes and synthetic media; C2PA is an industry standard increasingly referenced in AI transparency and disinformation policy contexts.
Metadata
Summary
Google describes its collaboration on C2PA (Coalition for Content Provenance and Authenticity) version 2.1, a technical standard aimed at embedding provenance metadata into content to help people understand how AI-generated or modified content was created. The initiative pairs with Google's SynthID watermarking tool as part of a broader industry effort to increase transparency around generative AI content. This represents an industry-level coordination effort on content authenticity standards.
Key Points
- Google contributed to developing C2PA v2.1, a provenance standard that embeds metadata tracking how content was created and modified over time.
- The initiative complements Google's SynthID watermarking technology, forming a layered approach to AI content identification.
- C2PA is an industry-wide coalition effort, highlighting the importance of cross-company coordination for content authenticity standards.
- The goal is to give consumers reliable signals about whether content is AI-generated or has been AI-modified.
- Google frames content provenance as a trust and safety issue, with its VP of Trust & Safety authoring the post.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Deepfake Detection | Approach | 91.0 |
| AI-Era Epistemic Infrastructure | Approach | 59.0 |
Cached Content Preview
How Google and the C2PA are increasing transparency for gen AI content
Sep 17, 2024
We’re helping to develop the latest technology to help people better understand how a particular piece of content was created and modified over time.
Laurie Richardson
Vice President, Trust & Safety
As we continue to bring AI to more products and services to help fuel creativity and productivity, we are focused on helping people better understand how a particular piece of content was created and modified over time. We believe it’s crucial that people have access to this information and we are investing heavily in tools and innovative solutions, like SynthID, to provide it.
We also know that partnering with others in the industry is essential to increase overall transparency online as content travels between platforms. That’s why, earlier this year, we joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member.
Today, we’re sharing updates on how we’re helping to develop the latest C2PA provenance technology and bring it to our products.
Advancing existing technology to create more secure credentials
Provenance technology can help explain whether a photo was taken with a camera, edited by software or produced by generative AI. This kind of information helps our users make more informed decisions about the content they’re engaging with — including photos, videos and audio — and builds media literacy and trust.
In joining the C2PA as a steering committee member, we’ve worked alongside the other members to develop and advance the technology used to attach provenance information to content. Through the first half of this year, Google collaborated on the newest version (2.1) of the technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance. Strengthening the protections against these types of attacks helps to ensure the data attached is not altered or mis
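The core idea behind validating a content's provenance history can be illustrated with a toy example. The sketch below is not the real C2PA Content Credentials format (which uses X.509 certificates and a binary manifest structure); it is a minimal, hypothetical chain of edit actions where each entry is cryptographically bound to the one before it, so altering any earlier step invalidates everything after it:

```python
import hashlib
import hmac
import json

# Illustrative only: a toy provenance chain, NOT the real C2PA
# Content Credentials format. The secret key stands in for a real
# signing certificate held by a camera or editing tool.
SECRET = b"demo-signing-key"

def sign_entry(entry: dict, prev_sig: str) -> str:
    # Bind this entry to the previous signature so the chain
    # of edits cannot be reordered or rewritten undetected.
    payload = json.dumps(entry, sort_keys=True) + prev_sig
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def append_action(chain: list, action: str, tool: str) -> None:
    entry = {"action": action, "tool": tool}
    prev_sig = chain[-1]["sig"] if chain else ""
    chain.append({**entry, "sig": sign_entry(entry, prev_sig)})

def verify_chain(chain: list) -> bool:
    # Recompute every signature in order; any tampering with an
    # earlier entry breaks validation of all later entries.
    prev_sig = ""
    for item in chain:
        entry = {k: v for k, v in item.items() if k != "sig"}
        if not hmac.compare_digest(item["sig"], sign_entry(entry, prev_sig)):
            return False
        prev_sig = item["sig"]
    return True

chain = []
append_action(chain, "created", "camera")
append_action(chain, "edited", "photo-editor")
assert verify_chain(chain)

# Rewriting the recorded history breaks the chain.
chain[0]["tool"] = "generative-ai"
assert not verify_chain(chain)
```

The "stricter validation requirements" the post describes operate in this spirit: a verifier replays the recorded history and rejects any manifest whose chain of signatures does not check out end to end.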
... (truncated, 7 KB total)