Creative Industry AI Fightback Sinks Into 'Murky Waters'
The UK High Court has largely shut down a high-profile attempt to stop tech firms from training AI on copyrighted images. The immediate takeaway for creatives: the door isn't closed, but it's barely ajar, and the burden of proof remains heavy.
The case centered on Getty Images versus Stability AI. Getty withdrew its core UK copyright and database claims after acknowledging it couldn't prove the training happened in the UK, which is required under UK law. That meant the court didn't decide the big question: whether training on copyrighted material is lawful in the UK.
What the Court Actually Said
Two issues were left to decide. First, whether making Stable Diffusion model weights available was a secondary copyright infringement. Second, whether outputs containing watermark-like features infringed Getty or iStock trademarks.
The court dismissed the secondary copyright claim. It found the model wasn't an "infringing copy" because it learns patterns from data rather than storing or reproducing original works. In plain English: the model doesn't contain your images.
On trademarks, Getty did win, though narrowly. Earlier versions of the model produced outputs with Getty-style watermarks, which the court deemed trademark infringement. But the judge stressed these were historic and extremely limited cases. The court didn't give a definitive view on passing off.
Why This Matters for Creatives
The ruling exposes a practical gap. If training happens outside the UK, UK rights are hard to enforce. That's a real-world hurdle for photographers, illustrators, and agencies.
As one legal expert observed, the finding that the model doesn't store copies of protected works will concern the creative industry and embolden AI developers. Another noted the result feels like a win for AI on secondary infringement, while leaving copyright and AI training "as murky as before."
Key Implications
- Training legality in the UK remains unresolved. The court didn't rule on it.
- Territorial proof is everything. If you can't show acts occurred in the UK, your UK claims struggle.
- Models learning "patterns" vs. copying matters. That framing helped Stability defeat secondary infringement.
- Trademarks still bite. Watermark-like outputs can trigger liability, though past examples were limited.
Practical Moves for Creatives
- Strengthen licensing: Add explicit "no AI training" clauses, define prohibited uses, and require downstream pass-through terms for platforms and clients.
- Control access: Use gated delivery (APIs, portals), watermark originals discreetly, and track usage where possible.
- Provenance signals: Adopt content credentials (C2PA-style) and keep verifiable logs of creation and publication.
- Monitoring and takedown: Set up alerts for misuse, collect evidence (timestamps, URLs, outputs), and pursue platform takedowns swiftly.
- Contracts first: For commissioned work, specify AI training permissions, indemnities, and audit rights.
- Think trademarks: Protect brand marks and watermarks; misuse may be easier to act on than copyright in this context.
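For the "verifiable logs of creation and publication" point above, a lightweight starting option is an append-only fingerprint log kept alongside your originals. The sketch below is a minimal illustration in Python, not a C2PA implementation: real content credentials embed cryptographically signed metadata inside the file itself, whereas this simply records a SHA-256 hash and UTC timestamp per work (the function names and log filename are our own, not from any standard).

```python
import hashlib
import json
import time
from pathlib import Path

def log_work(path: str, log_file: str = "provenance_log.jsonl") -> dict:
    """Append a SHA-256 fingerprint and UTC timestamp for a finished work
    to an append-only JSON Lines log, and return the entry."""
    data = Path(path).read_bytes()
    entry = {
        "file": Path(path).name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "logged_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def verify_work(path: str, entry: dict) -> bool:
    """Check that a file still matches its logged fingerprint."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest() == entry["sha256"]
```

A log like this doesn't prove authorship on its own, but paired with publication records (timestamps, URLs) it makes the evidence-gathering step in a takedown or dispute much faster.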
What's Next
The US case rolls on, and UK policy may shift. The Government has flagged potential changes, with results from its copyright and AI consultation still pending. For now, the combination of territorial limits and the "models learn patterns" view keeps enforcement tough.
For the policy backdrop, review the UK Intellectual Property Office's consultation materials: UK IPO: AI, copyright and patents.
Bottom Line
The legal fight isn't over, but this ruling signals a harder road for UK copyright claims tied to AI training, especially against overseas developers. Focus on what you can control: clean contracts, clear rights, better provenance, and faster enforcement. Keep creating, and protect the business around the work.
Want to sharpen your AI workflow without risking your IP? Explore role-based training paths: AI courses by job.