Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Meta AI

Relevant to AI governance and deployment safety discussions around synthetic media provenance; Stable Signature represents a practical technical intervention for watermarking generative AI outputs to support accountability and reduce misuse risks.

Metadata

Importance: 45/100 · blog post · primary source

Summary

Meta introduces Stable Signature, a technique for invisibly watermarking images generated by AI systems by fine-tuning the decoder of latent diffusion models to embed a hidden signature. This approach allows provenance tracking of AI-generated content without significantly degrading image quality. The method aims to help identify the source of synthetic media and combat misinformation.

Key Points

  • Embeds invisible watermarks directly into latent diffusion model decoders, so all generated images automatically carry a traceable signature (a detection sketch follows this list).
  • Watermarks persist through common image transformations like cropping, compression, and color adjustments, making them robust in real-world use.
  • Only requires fine-tuning the decoder, not the full model, making it computationally efficient to deploy across existing generative systems.
  • Designed to support content provenance and help distinguish AI-generated images from authentic ones to reduce misinformation risk.
  • Part of Meta's broader responsible AI efforts, complementing initiatives like the C2PA content credentials standard.
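
The points above assume a verifier that can read the hidden signature back out of an image. The sketch below illustrates that detection step; the extractor network, the 48-bit message length, and the matching threshold are illustrative assumptions rather than details taken from Meta's release.

```python
import torch

MESSAGE_BITS = 48                                   # assumed message length, not from the paper
key = torch.randint(0, 2, (MESSAGE_BITS,))          # the signature assigned to one model copy

def bit_accuracy(decoded: torch.Tensor, expected: torch.Tensor) -> float:
    """Fraction of recovered bits that match the expected signature."""
    return (decoded == expected).float().mean().item()

def is_watermarked(image: torch.Tensor, extractor: torch.nn.Module,
                   threshold: float = 0.9) -> bool:
    """Run a (hypothetical) watermark extractor and flag the image if enough bits match."""
    with torch.no_grad():
        logits = extractor(image.unsqueeze(0)).squeeze(0)  # per-bit logits, shape (MESSAGE_BITS,)
    decoded = (logits > 0).long()
    return bit_accuracy(decoded, key) >= threshold
```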

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 9 KB
Stable Signature: A new method for watermarking images created by open source generative AI

October 6, 2023 • 6 minute read

AI-powered image generation is booming, and for good reason: It’s fun, entertaining, and easy to use. While these models enable new creative possibilities, they also raise concerns about potential misuse by bad actors who may intentionally generate images to deceive people. Even images created in good fun could still go viral and potentially mislead people. For example, earlier this year, images appearing to show Pope Francis wearing a flashy white puffy jacket went viral. The images weren’t actual photographs, but plenty of people were fooled, since there weren’t any clear indicators that the content was created by generative AI.

At FAIR, we’re excited about driving continued exploratory research in generative AI, but we also want to make sure we do so in a manner that prioritizes safety and responsibility. Today, together with Inria, we are excited to share a research paper and code detailing Stable Signature, an invisible watermarking technique we created to distinguish when an image is created by an open source generative AI model. Invisible watermarking incorporates information into digital content. The watermark is invisible to the naked eye but can be detected by algorithms, even if people edit the images. While there have been other lines of research around watermarking, many existing methods create the watermark after an image is generated.

More than 11 billion images have been created using models from three open source repositories, according to Everypixel Journal. Because those pipelines apply the watermark as a step after generation, the invisible watermark can be removed simply by deleting the line of code that generates it.
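
To make that fragility concrete, here is a toy sketch of post-hoc watermarking (a simple least-significant-bit scheme, not Stable Signature and not any specific repository's code): the mark is stamped onto the pixels only after the image exists, so deleting the single `apply_watermark` call strips it entirely.

```python
import numpy as np

KEY = np.random.default_rng(0).integers(0, 2, 48).astype(np.uint8)  # illustrative 48-bit message

def apply_watermark(image: np.ndarray, bits: np.ndarray = KEY) -> np.ndarray:
    """Toy post-hoc watermark: hide `bits` in the least significant bits of the first pixels."""
    flat = image.copy().reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def generate_image(prompt: str) -> np.ndarray:
    image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in for a model's output
    image = apply_watermark(image)  # <-- deleting this single line removes the watermark
    return image
```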

 
 While the fact that these safeguards exist is a start, this simple tactic shows there’s plenty of potential for this feature to be exploited. The work we’re sharing today is a solution for adding watermarks to images that come from open source generative AI models. We’re exploring how this research could potentially be used in our models. In keeping with our approach to open science, we want to share this research with the AI community in the hope of advancing the work being done in this space.

 How the Stable Signature method works

 
Stable Signature closes off this removal route by rooting the watermark in the model itself, so that generated images can be traced back to where they were created.
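
As a rough sketch of that idea, and not Meta's released training code: only the latent decoder is fine-tuned, with a loss that pushes a frozen watermark extractor to recover a fixed key from every decoded image while keeping outputs close to what the original decoder would produce. The module names, the plain MSE image loss, and the loss weight below are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def finetune_step(decoder: torch.nn.Module,
                  extractor: torch.nn.Module,     # frozen watermark extractor
                  latents: torch.Tensor,          # batch of image latents
                  originals: torch.Tensor,        # images from the unmodified decoder
                  key: torch.Tensor,              # target signature bits, shape (n_bits,)
                  optimizer: torch.optim.Optimizer,
                  lambda_img: float = 1.0) -> float:
    """One fine-tuning step: make decoded images carry `key` while staying close
    to what the original decoder would have produced."""
    decoded_images = decoder(latents)

    # Message loss: the frozen extractor should recover the target bits.
    logits = extractor(decoded_images)                        # shape (batch, n_bits)
    msg_loss = F.binary_cross_entropy_with_logits(
        logits, key.float().expand_as(logits))

    # Image loss: keep outputs close to the original decoder's images.
    img_loss = F.mse_loss(decoded_images, originals)

    loss = msg_loss + lambda_img * img_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper pairs the message loss with a perceptual image loss; plain MSE is used here only to keep the sketch short.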

 Let’s take a look at how this process works with the belo

... (truncated, 9 KB total)
Resource ID: 8ee430e614d4e78b | Stable ID: sid_SxdnGhl4SO