Recent developments in generative modeling and natural language processing have made photo-realistic image creation and manipulation straightforward with tools like DALL·E 2 and Stable Diffusion. Though this progress in generative AI is exciting, it raises fresh concerns about eroding trust in photo-realistic visuals.
Forensics, the use of unobtrusive techniques to identify computer-generated or modified photographs, is a good starting point. Alternatively, existing watermarking techniques can be superimposed atop the image-creation process: an invisible secret message is embedded in the image and later used to verify its authenticity. This approach has a few problems:
- Post-generation watermarking is simple to remove if the model is leaked or open-sourced.
- In Stable Diffusion, an open-source project, the watermark can be removed by simply commenting out a single line of code.
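The fragility of post-hoc watermarking comes from its separability: the watermark is a distinct step applied after generation, so anyone with access to the code can skip it. A minimal sketch of the pattern (the function names and data shapes are hypothetical stand-ins, not Stable Diffusion's actual API):

```python
# Hypothetical post-hoc watermarking pipeline. The watermark is applied
# as a separate step after generation, so removing it is trivial.

def generate_image(prompt):
    # Stand-in for a diffusion model's sampling loop.
    return {"prompt": prompt, "pixels": [0.5] * 16}

def apply_watermark(image, message="WM"):
    # Stand-in for an invisible post-processing watermark.
    marked = dict(image)
    marked["watermark"] = message
    return marked

def pipeline(prompt):
    image = generate_image(prompt)
    image = apply_watermark(image)  # commenting out this line removes the watermark
    return image
```

This is exactly the weakness Stable Signature targets: the watermark lives in a post-processing step rather than in the model itself.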
Recent research by Meta AI, Centre Inria de l'Université de Rennes, and Sorbonne University introduces Stable Signature, a technique that seamlessly incorporates watermarking into the generation process without altering the underlying architecture. The pre-trained generative model is modified so that all generated images carry the specified watermark.
This method offers many advantages:
- Both the generator and its outputs are safeguarded. Watermarking also becomes computationally lighter, simpler, and more secure because no extra processing of the generated image is needed.
- Model providers may distribute their models to several user groups, each with a different watermark, and later check whether the models are being used ethically.
- Further, media organizations could use the technique to identify when an image has been computer-generated.
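The second benefit, distributing differently watermarked copies, amounts to giving each user group its own binary key and matching extracted bits back to a group. A sketch under illustrative assumptions (the 48-bit key length matches common deep-watermarking setups, but the matching rule here is a simple nearest-key heuristic, not the paper's exact procedure):

```python
import random

def hamming(a, b):
    """Number of differing bits between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

# Each user group receives a model copy fine-tuned to embed its own key.
rng = random.Random(42)
group_keys = {f"group_{i}": [rng.randint(0, 1) for _ in range(48)] for i in range(4)}

def attribute(extracted_bits, keys):
    """Return the group whose key is closest (in Hamming distance) to the
    bits recovered by the watermark extractor from a suspect image."""
    return min(keys, key=lambda g: hamming(keys[g], extracted_bits))
```

Even if editing corrupts a handful of bits, the extracted message stays far closer to the true group's key than to any other random key, so attribution survives moderate noise.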
Because of their versatility, the team used Latent Diffusion Models (LDM). This study demonstrates that natively embedding a watermark into all generated images is possible with only a small amount of generative model fine-tuning. Stable Signature does not alter the diffusion process or require changes to the underlying architecture, so it works with many different LDM-based generation techniques. The fine-tuning process re-trains the LDM decoder using a perceptual image loss and a hidden-message loss computed by the watermark extractor. The extractor itself is pre-trained with a streamlined version of the deep watermarking technique HiDDeN.
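The fine-tuning objective described above combines two terms: a message loss pushing the extractor's output toward the target bits, and a perceptual loss keeping the decoded image close to the original. A numerical sketch of the combined objective (plain Python functions standing in for the actual networks; the weighting `lam` is an illustrative assumption):

```python
import math

def message_loss(logits, target_bits):
    """Binary cross-entropy between the extractor's per-bit logits and the
    secret message the decoder is being fine-tuned to embed."""
    total = 0.0
    for z, b in zip(logits, target_bits):
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability that the bit is 1
        total += -(b * math.log(p) + (1 - b) * math.log(1 - p))
    return total / len(logits)

def perceptual_loss(decoded, original):
    """Stand-in for a perceptual image distance (the paper uses a learned
    perceptual loss); here simply mean squared error over pixel values."""
    return sum((d - o) ** 2 for d, o in zip(decoded, original)) / len(decoded)

def fine_tuning_objective(logits, target_bits, decoded, original, lam=0.1):
    # Total loss: embed the message while staying perceptually close to the
    # original decoder's output. lam balances the two terms (assumed value).
    return message_loss(logits, target_bits) + lam * perceptual_loss(decoded, original)
```

Minimizing the first term makes the extractor recover the key from every generated image; the second term is what keeps the watermarked outputs perceptually identical to the original model's.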
The researchers also built a realistic testbed for assessing robustness to image-editing attacks, covering tasks such as AI image detection and model lineage tracking. For instance, even when images generated by the model are cropped to 10% of their original size, the researchers could still detect 90% of them, with only one false positive per 10⁶ images. They show that the FID score of the generation is unaffected and that generated images are perceptually identical to those produced by the original model across a variety of LDM-based tasks (text-to-image, inpainting, editing, etc.), ensuring the model's continued utility.
Through this work, the researchers demonstrate the advantages of watermarking over passive detection techniques. They hope to inspire other researchers and professionals to take similar measures before releasing their models to the public.