
MIT Researchers Develop PhotoGuard, an AI Tool to Protect Images from AI Manipulation

Yusuf Balogun
Yusuf is a law graduate and freelance journalist with a keen interest in tech reporting.


As artificial intelligence (AI) continues to advance, so does its potential for malicious applications. One of the growing concerns is AI image manipulation, where sophisticated algorithms can create highly convincing fake images. However, the same technology that fuels this threat can also be harnessed to counter it effectively.

In the quest to create such a countermeasure, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a technique named "PhotoGuard" that uses perturbations – minuscule alterations in pixel values, invisible to the human eye but detectable by computer models – to disrupt a model's ability to manipulate the image.
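To make the idea concrete, here is a minimal sketch of what an imperceptible perturbation looks like in code. This is an illustration of the general concept, not PhotoGuard's actual method: the epsilon budget, the toy image, and the function names are all our own assumptions.

```python
import numpy as np

def perturb(image, noise, epsilon=2.0):
    """Add a perturbation to an image, clipping each pixel change to
    [-epsilon, epsilon] so the alteration stays imperceptibly small.
    epsilon is an illustrative budget, not a value from the paper."""
    delta = np.clip(noise, -epsilon, epsilon)
    return np.clip(image.astype(np.float64) + delta, 0, 255)

image = np.full((4, 4), 128.0)                 # toy 4x4 grayscale image
noise = np.random.uniform(-10, 10, (4, 4))     # candidate perturbation
protected = perturb(image, noise)

# Guaranteed by the clipping: no pixel moved by more than epsilon=2,
# far below what a human eye would notice on an 8-bit image.
assert np.max(np.abs(protected - image)) <= 2.0
```

The key constraint is the per-pixel clipping: a human sees essentially the same picture, while a model reading exact pixel values sees a meaningfully different input.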

Hadi Salman, an MIT graduate student in electrical engineering and computer science, is the lead author of the paper on PhotoGuard, alongside Alaa Khaddaj, Guillaume Leclerc MS ’18, and Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The work was supported by the U.S. Defense Advanced Research Projects Agency and partially completed on the MIT Supercloud compute cluster, with funding from the U.S. National Science Foundation and Open Philanthropy.

Using PhotoGuard to Protect Images from AI Manipulation

To produce these perturbations, PhotoGuard employs two alternative attack strategies. The simpler encoder attack targets the image’s latent representation inside the AI model, tricking the model into perceiving the image as random. The more complex diffusion attack defines a target image and optimizes the perturbations so that the final, edited image resembles that target as closely as possible.
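The encoder attack can be sketched as a projected-gradient optimization: nudge the image, within an imperceptibility budget, so its latent representation moves toward some decoy target. The sketch below stands a toy linear map in for the model's encoder; the real attack differentiates through an actual network, and the seed, epsilon, step size, and iteration count here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the model's encoder: 16 "pixels" -> 8 latent values.
W = rng.normal(size=(8, 16))
def encode(x):
    return W @ x

x = rng.uniform(0, 1, 16)            # original "image"
target_latent = rng.normal(size=8)   # decoy latent to push toward
epsilon, step = 0.05, 0.01           # imperceptibility budget, step size
delta = np.zeros(16)                 # the perturbation being optimized

for _ in range(200):
    # Gradient of ||encode(x + delta) - target_latent||^2 w.r.t. delta.
    grad = 2 * W.T @ (encode(x + delta) - target_latent)
    delta -= step * np.sign(grad)                # PGD-style signed step
    delta = np.clip(delta, -epsilon, epsilon)    # project back into budget

before = np.linalg.norm(encode(x) - target_latent)
after = np.linalg.norm(encode(x + delta) - target_latent)
assert after < before   # the latent moved toward the decoy target
```

The diffusion attack follows the same optimization pattern but is costlier, since each gradient step requires running the editing model end to end rather than a single encoder pass.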

During implementation, the team perturbed the input space of the original image. These perturbations are then applied to images at the inference stage, providing a strong defense against unauthorized modification. The diffusion attack demands a substantial amount of GPU memory and is more computationally costly than its simpler counterpart. According to the team, approximating the diffusion process with fewer steps mitigates this cost and makes the technique more practical.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman.

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

To illustrate the attack further, consider a creative endeavor. The original image is one drawing, and the target image is an entirely different drawing. The diffusion attack amounts to making minute, undetectable adjustments to the first drawing so that, to an AI model, it begins to resemble the second. To the naked eye, however, the original drawing remains unchanged.

By doing this, you can protect the original image from manipulation: any AI model that tries to edit it will unknowingly alter it as if it were the target image. The result PhotoGuard produces is a picture shielded from unauthorized editing by AI models while remaining visually unchanged to human observers.

Source: MIT News

