“PhotoGuard” by MIT Defends Your Photos Against Illicit AI Edits

As AI models advance, the ability to both create and edit images is becoming increasingly common, with industry leaders like Adobe and Shutterstock at the forefront.
Yet this growth in AI capability brings new challenges, such as unauthorized alterations and outright theft of digital images. MIT CSAIL’s “PhotoGuard” could help combat the former, while traditional watermarking techniques address the latter.

The “PhotoGuard” strategy involves subtly tweaking certain pixels in an image. These imperceptible manipulations, known as “perturbations”, interfere with an AI model’s ability to comprehend the photo’s contents by corrupting its latent mathematical representation of the image. Unlike watermarks, the perturbations are indiscernible to humans, yet they obstruct and confuse AI systems attempting to parse the image.
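To get a feel for what “imperceptible” means here, consider a rough sketch (not CSAIL’s actual implementation): a perturbation can be modeled as bounded noise added to the pixel array, where each pixel shifts by at most a couple of intensity levels out of 255, far below what a human viewer would notice. The `perturb` function and its parameters below are illustrative assumptions, not part of PhotoGuard’s published code.

```python
import numpy as np

def perturb(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a tiny, bounded perturbation to an 8-bit image.

    Each pixel moves by at most `epsilon` intensity levels (out of 255),
    which is invisible to humans. PhotoGuard's perturbations are chosen
    adversarially rather than at random, but the magnitude constraint is
    the key idea this sketch illustrates.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A flat gray test "photo": the protected copy differs from the original
# by at most 2 intensity levels per channel.
image = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = perturb(image)
```

The point of the bound is the asymmetry: a change this small leaves the image looking identical to people, while an adversarially chosen version of the same-magnitude change can significantly shift a model’s internal representation.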

An “encoder attack” uses perturbations to target the model’s latent representation, the complex internal description of pixel positions and colors that the AI uses to understand an image. The perturbations scramble the AI’s comprehension of the protected image. A more advanced “diffusion attack” goes further, tricking the AI into perceiving an entirely different photo; any edits the AI then attempts come out distorted and unrealistic.
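The optimization behind an encoder attack can be sketched with a toy example. The real system perturbs images against a deep image encoder; here a small linear map stands in for that encoder (an assumption for illustration only), and projected gradient descent finds a bounded perturbation that pushes the image’s latent toward the latent of a different, target image:

```python
import numpy as np

def encoder_attack(x, W, z_target, epsilon=0.05, steps=200, lr=0.1):
    """Projected gradient descent sketch of an 'encoder attack'.

    Finds a per-pixel perturbation delta (|delta| <= epsilon) such that
    the encoder's latent for x + delta moves toward z_target, so the
    model 'sees' something other than the true image. W is a toy linear
    encoder; real attacks backpropagate through a neural encoder, but
    the optimization loop is analogous.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = W @ (x + delta) - z_target      # error in latent space
        grad = 2.0 * W.T @ residual                # gradient of squared error w.r.t. delta
        delta -= lr * grad                         # gradient descent step
        delta = np.clip(delta, -epsilon, epsilon)  # project back: keep it imperceptible
    return delta

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 16)              # flattened 16-pixel "image"
W = rng.normal(size=(4, 16)) / 4       # toy encoder: 16 pixels -> 4 latent values
z_target = W @ rng.uniform(0, 1, 16)   # latent of a different image

delta = encoder_attack(x, W, z_target)
before = np.linalg.norm(W @ x - z_target)
after = np.linalg.norm(W @ (x + delta) - z_target)
```

After the loop, the perturbed image’s latent sits closer to the target latent than the original’s did (`after < before`), even though every pixel moved by at most `epsilon`. That gap between a tiny pixel-space change and a large latent-space change is what the attack exploits.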

As lead author Hadi Salman explains, “The encoder attack confuses the model into perceiving the input image as something else, while the diffusion attack guides the diffusion model to make edits towards a targeted image.” Though not yet foolproof, and still vulnerable to reverse engineering, this breakthrough by CSAIL marks a major step toward securing digital media against AI exploitation.

Salman argues that a collaborative effort between model developers and policymakers is essential to robustly defend against unauthorized AI image editing. The urgency of this issue demands that tech companies invest now in engineering protections against their own products’ potential for harm. As AI generation of media accelerates, maintaining ethical standards surrounding consent and ownership grows increasingly vital. Initiatives like PhotoGuard point toward a solution, immunizing images through subtle signals imperceptible to viewers yet disruptive to the machines that try to edit them. Such encoding could help uphold creative rights in the dawning age of artificial intelligence.