AI Tool LightShed Bypasses Art Protection, Leaving Creators Exposed to Unlicensed Training

A new study reveals that popular art protection tools like Glaze and NightShade can be easily bypassed by LightShed, exposing artists’ work to unauthorized AI training. This calls for stronger, adaptive defenses to safeguard creative content.

Published on: Jun 29, 2025

New Study Exposes Flaws in Art Protection Tools Against AI Training

Popular tools like Glaze and NightShade were created to protect artists by embedding subtle distortions into images before they are posted online, aiming to prevent unauthorized AI training. These distortions confuse AI models, stopping them from learning from the artwork. However, a recent study by researchers from the University of Cambridge, TU Darmstadt, and UT San Antonio reveals that these defenses can be bypassed with relative ease.

The research introduces LightShed, a method that detects and removes these protective distortions with a 99.98% success rate. This means that even with state-of-the-art tools, artists’ work remains vulnerable to being exploited for AI training without consent.

How LightShed Defeats Art Protection

LightShed reverses the image-poisoning technique in three clear steps:

  • Detect the hidden distortions embedded by tools like Glaze or NightShade.
  • Learn the distortion patterns from known examples.
  • Restore the image to its original, clean form.

This process effectively neutralizes the protection, allowing AI developers or others to reuse art in training datasets regardless of the artist’s efforts to block it.
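The study's actual method is not reproduced here, but the detect-learn-restore idea can be illustrated with a toy sketch. Everything in it is assumed for illustration: the fixed sinusoidal "distortion", the 8×8 toy images, and the simple residual-averaging step standing in for LightShed's learned model. Real Glaze/NightShade perturbations are image-dependent and far subtler, and removing them requires a trained network rather than an average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "artworks": 8x8 grayscale arrays (purely illustrative).
clean_examples = [rng.random((8, 8)) for _ in range(20)]

# Hypothetical fixed perturbation standing in for a Glaze/NightShade-style
# distortion. Real perturbations vary per image; this one does not.
pattern = 0.05 * np.sin(np.linspace(0, 2 * np.pi, 64)).reshape(8, 8)

poisoned_examples = [img + pattern for img in clean_examples]

# Step 2 ("learn"): estimate the distortion from known (poisoned, clean)
# pairs by averaging the residuals -- a toy stand-in for a learned model.
learned_pattern = np.mean(
    [p - c for p, c in zip(poisoned_examples, clean_examples)], axis=0
)

# Step 3 ("restore"): subtract the learned pattern from a new poisoned image.
new_clean = rng.random((8, 8))
new_poisoned = new_clean + pattern
restored = new_poisoned - learned_pattern

err_before = np.abs(new_poisoned - new_clean).mean()
err_after = np.abs(restored - new_clean).mean()
print(f"mean error before: {err_before:.4f}, after: {err_after:.2e}")
```

In this toy setting the recovered pattern matches the injected one almost exactly, so the restored image is nearly identical to the clean original; the point is only to show why access to known protected/unprotected examples makes a purification attack feasible.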

Tools like Glaze and NightShade have been downloaded nearly nine million times because they offer a way for creatives to protect their work in an environment where consent is often overlooked. Yet, the study shows that these tools are not enough.

What This Means for Creatives

As AI models continue to train on vast amounts of online content, artists, writers, and journalists find themselves in a tough spot. Many feel their work is being used without permission and are pushing back legally. For example, the BBC is taking action against the startup Perplexity for using its content without authorization. Meanwhile, the UK government’s proposal to allow unlicensed AI training on copyrighted material has sparked controversy.

High-profile creatives such as Paul McCartney and Elton John have publicly condemned the unauthorized use of their work, calling it “criminal.” Multiple lawsuits are underway against major AI companies like OpenAI, Meta, Stability AI, and Midjourney. These cases focus on unauthorized training involving copyrighted text, images, and music.

Looking Ahead: Defenses Must Adapt

The researchers behind LightShed emphasize that exposing these vulnerabilities is not about undermining artists’ defenses but about encouraging the development of stronger, more adaptive protections. Collaboration between artists, developers, and researchers will be key to creating tools that can keep pace with evolving AI capabilities.

For creatives interested in understanding AI’s impact and exploring how to protect their work, resources and courses are available that can provide practical insights and skills. Complete AI Training offers up-to-date courses that cover these challenges and how to navigate AI tools effectively.