UK Court Dismisses Most of Getty's AI Claims, Training Law Still Murky

The UK court dodged the big call on AI training, leaving creators in limbo and giving developers a lift. It said models learn patterns rather than store works, but flagged narrow trademark hits over watermark-like outputs.

Categorized in: AI News, Creatives
Published on: Nov 06, 2025

Creative Industry AI Fightback Sinks Into 'Murky Waters'

A recent UK High Court ruling has knocked back a major attempt to stop tech companies from training AI on copyrighted work. The decision sidestepped the core question of whether training on copyrighted content is lawful in the UK, because the claimant couldn't prove the training happened in the UK. That leaves creatives with the same uncertainty, and AI developers with fresh confidence.

The case began in January 2023, when Getty Images filed wide-ranging claims against Stability AI in the UK and the US. During the UK trial, Getty withdrew its primary copyright and database-right claims after accepting there was no evidence the training occurred within UK jurisdiction. That gap matters: if training happens offshore, UK copyright may be hard to enforce.

What the court actually decided

Two claims remained. First: did making the Stable Diffusion model weights available in the UK amount to secondary copyright infringement? Second: did outputs containing watermark-like elements infringe the Getty Images or iStock trademarks?

The court dismissed the secondary copyright claim. The judge found the Stable Diffusion model isn't an "infringing copy" because it learns patterns rather than storing or reproducing original works. As one legal expert put it, "the model did not store any copy of the protected works."

Getty did land a win on trademarks. The court found infringement where AI-generated images included "Getty Images"- or "iStock"-style watermarks. But the judge called these findings "historic and extremely limited in scope," tied to older models and specific examples.

The court did not make a definitive ruling on the passing-off claim.

Why this matters if you create for a living

Two realities now sit side by side. On one hand, the ruling suggests that simply making a trained model available in the UK won't automatically be a copyright problem if the training happened elsewhere. On the other, brands can still push back where outputs show watermark artifacts that resemble protected marks.

Nathan Smith of Katten Muchin Rosenman LLP summarized it well: while parts of the decision favor AI developers, it "arguably leaves the legal waters of copyright and AI training as murky as before." Translation: you'll need process, contracts, and vigilance, not hope.

Practical moves creatives can make now

  • Update your licensing terms: state whether your work can be used for AI training, dataset creation, or model fine-tuning. Make the prohibition explicit where you want it.
  • Use Content Credentials (C2PA) to attach provenance to your work. It's not a lock, but it helps with tracing and trust.
  • Signal your preferences: use platform-level "NoAI" or similar controls where available, and consider robots.txt or meta signals (see the robots.txt sketch after this list). These are norms, not guarantees, but they reduce silent scraping.
  • Protect previews: show lower-resolution samples and limit unwatermarked high-res downloads to trusted buyers or gated clients (see the preview-resizing sketch below).
  • Scrub your prompts and outputs: avoid brand names and stock-library marks in prompts; add negatives like "watermark, logo, text overlay" (see the negative-prompt sketch below). Manually review outputs before publishing.
  • Keep an audit trail: save prompts, seeds, model versions, and edits (see the logging sketch below). If a question arises, you'll want receipts.
  • Choose vendors with clear policies: prefer tools that publish data sources, allow opt-outs, and filter trademark-like artifacts.
  • Report misuse: document instances where AI outputs include your branding or watermarks and notify the platform or developer.
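
To make the "signal your preferences" point concrete, here is a minimal robots.txt sketch. GPTBot (OpenAI), Google-Extended (Google), and CCBot (Common Crawl) are real AI-related crawler tokens that respect robots.txt; blocking them is a polite request, not an enforcement mechanism, and other scrapers may ignore it.

```
# robots.txt -- ask known AI crawlers to skip the whole site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

On individual pages, the informal equivalent is a meta tag such as `<meta name="robots" content="noai, noimageai">`. The "noai" values are a community convention that some platforms honor, not a standard every crawler respects.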
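
For the preview-protection point, here is a minimal sketch of capping the resolution of anything you publish openly, using the Pillow imaging library (file names and sizes are illustrative):

```python
# Minimal sketch: publish a capped-resolution preview, keep the
# full-resolution original gated. Assumes `pip install pillow`.
from PIL import Image

MAX_SIDE = 1024  # longest edge of the public preview, in pixels

img = Image.open("original.tif")
preview = img.copy()
preview.thumbnail((MAX_SIDE, MAX_SIDE))  # shrinks in place, keeps aspect ratio
preview.convert("RGB").save("preview.jpg", quality=80)
```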
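
For the prompt-hygiene point, here is what a negative prompt looks like in code, sketched with the Hugging Face diffusers library. The model ID and prompts are illustrative, and a negative prompt only nudges generation away from the listed concepts, so the manual review step still applies.

```python
# Minimal sketch: steer a Stable Diffusion pipeline away from
# watermark-like artifacts with a negative prompt.
# Assumes `pip install diffusers transformers torch` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # illustrative model ID
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="editorial photo of a rainy city street at dusk",
    negative_prompt="watermark, logo, text overlay",  # push away from mark-like elements
).images[0]
image.save("street.png")
```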
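
And the audit-trail point is easy to automate: append one record per generation to a log file. A minimal sketch using only the Python standard library (the field names are illustrative; capture whatever your tools expose):

```python
# Minimal sketch: one JSON line per generation -- your receipts.
import json
import time
from pathlib import Path

def log_generation(prompt, negative_prompt, seed, model, output_path,
                   log_file="generation_log.jsonl"):
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": seed,
        "model": model,  # include the exact version or weights hash
        "output": str(output_path),
    }
    # Append as JSON Lines so the log survives crashes mid-run.
    with Path(log_file).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_generation(
    prompt="editorial photo of a rainy city street at dusk",
    negative_prompt="watermark, logo, text overlay",
    seed=42,
    model="stabilityai/stable-diffusion-2-1",
    output_path="street.png",
)
```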

What this means for agencies and brands

  • QA your creative pipeline for watermark-like artifacts before assets go live. Build a quick visual check into your workflow (an automated pre-screen is sketched after this list).
  • Add contract language requiring vendors to ensure AI outputs are free of third-party marks and to warrant the absence of known watermark artifacts.
  • Use internal model governance: track which models and versions are cleared for client work and keep a short, updated "allowed list."
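
One way to back up that visual check with automation: run OCR over generated assets and flag anything containing readable text, since watermark artifacts tend to surface as text-like marks. A minimal sketch using the pytesseract wrapper (assumes the Tesseract OCR engine is installed locally; the term list and thresholds are illustrative, and this supplements rather than replaces human review):

```python
# Minimal sketch: flag generated images that contain readable text,
# a common symptom of watermark-like artifacts.
# Assumes `pip install pytesseract pillow` plus a local Tesseract install.
from pathlib import Path

import pytesseract
from PIL import Image

SUSPECT_TERMS = ("getty", "istock", "watermark", "stock photo")  # illustrative

def flag_assets(folder):
    flagged = []
    for path in sorted(Path(folder).glob("*.png")):
        text = pytesseract.image_to_string(Image.open(path)).lower()
        # Flag known mark names, or any image with substantial readable text.
        if any(term in text for term in SUSPECT_TERMS) or len(text.strip()) > 20:
            flagged.append((path.name, text.strip()[:80]))
    return flagged

for name, snippet in flag_assets("./renders"):
    print(f"REVIEW {name}: {snippet!r}")
```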

The legal picture isn't settled

The US case against Stability AI continues, and UK policymakers have said they may revisit the law. Until that happens, enforcement against overseas training remains difficult, and the line between "learning" and "copying" will be argued case by case.

If you want a refresher on current UK exceptions (including text and data mining), see the UK government's guidance on exceptions to copyright. It won't solve everything, but it's the baseline the courts will look at.

Bottom line

Models that learn patterns without storing your files aren't, by default, being treated as infringing under UK law, especially if trained offshore. But trademark risk from watermark-like outputs is real, even if limited.

Treat AI like any other tool in your stack: write better contracts, set guardrails, and verify outputs. That's how you keep shipping work you can stand behind.

Want help building safer, smarter AI workflows for design, photography, and video? Explore practical courses and tools at Complete AI Training.

