Stability AI's UK win leaves the core copyright question unanswered
Stability AI largely prevailed against Getty Images in England's High Court, but the ruling dodged the key issue everyone cares about: whether training AI models on copyrighted works requires permission. The case was expected to set a clear marker. It didn't.
The result offers little guidance for other AI companies or rightsholders. It does, however, send a strong signal on trademarks and a narrow signal on copying.
What the High Court actually decided
The judge found trademark infringement where Stable Diffusion outputs included Getty-style watermarks. That's a straightforward brand issue: outputs that mimic a source mark can confuse consumers and dilute the mark.
On copyright, the court rejected secondary infringement because, on the evidence, Stable Diffusion does not store or reproduce Getty's works. Think of it as a finding about model internals and artifacts, not a blanket approval of training practices.
What the court did not decide
The central dispute, whether training on copyrighted material requires a license, never reached a conclusion. Getty dropped the training claim mid-trial because its evidentiary foundation was weak.
No precedent was set on training legality. Any reliance on this case for training policy would be risky.
Why this matters for in-house and litigators
Trademark risk is real at the output layer. If your model can spit out marks or watermark-like signals, you need filtering, detection, and prompt/output controls.
Copyright exposure remains jurisdiction-specific and fact-heavy. Without a ruling on training, your risk turns on data provenance, logging, and the exact behavior of your model and tooling.
Practical steps for AI developers
- Deploy watermark and logo filters, including pre- and post-generation checks, and log blocked outputs (a minimal sketch follows this list).
- Tighten data governance: source documentation, license records, opt-out handling, and clear retention/deletion policies.
- Reduce memorization risk: apply deduplication, regularization, and evals for near-duplicate regeneration.
- Implement prompt and output monitoring for brand terms; escalate to legal when flagged.
- Update your terms to restrict users from attempting to reproduce third-party marks or identifiable copyrighted works.
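To make the filtering and monitoring items concrete, here is a minimal, hypothetical sketch of pre- and post-generation checks in Python. The brand-term list and the `watermark_detector` callable are placeholders rather than any real product's API; a production system would use legally vetted term lists and a trained classifier.

```python
import logging
import re
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical brand-term blocklist; a real list would come from legal review.
BRAND_TERMS = ["getty images", "shutterstock", "watermark"]

@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None

def check_prompt(prompt: str) -> ModerationResult:
    """Pre-generation check: refuse prompts that request marks outright."""
    for term in BRAND_TERMS:
        if re.search(rf"\b{re.escape(term)}\b", prompt, re.IGNORECASE):
            return ModerationResult(False, f"brand term in prompt: {term}")
    return ModerationResult(True)

def check_output(image_bytes: bytes,
                 watermark_detector: Callable[[bytes], float],
                 threshold: float = 0.5) -> ModerationResult:
    """Post-generation check: score the output with whatever watermark/logo
    classifier you actually run (the callable here is a stand-in, not a real API)."""
    score = watermark_detector(image_bytes)
    if score >= threshold:
        return ModerationResult(False, f"watermark-like content (score={score:.2f})")
    return ModerationResult(True)

def log_if_blocked(result: ModerationResult, request_id: str) -> None:
    """Keep an audit trail of blocked generations, as the list above recommends."""
    if not result.allowed:
        logging.warning("blocked request %s: %s", request_id, result.reason)
```

The point of the audit log is discovery-readiness: every blocked output becomes evidence that controls were in place and enforced.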
Practical steps for rightsholders
- Preserve evidence of scraped use with timestamps, dataset hashes, and model behavior tests (see the hashing sketch after this list).
- Focus claims where proof is strongest: watermark reproduction, trademark confusion, contract breaches, or explicit copying in outputs.
- Use machine-readable opt-outs and monitor whether they're respected across updates and retrains.
- Consider licensing frameworks that price training separately from output display or derivative services.
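As one way to implement the evidence-preservation point, a hedged sketch using only the Python standard library: hash each captured file and record a UTC timestamp in a manifest. The paths and file selection are illustrative, not prescriptive.

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def sha256_file(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 so large captures need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot_evidence(paths: list, out: pathlib.Path) -> None:
    """Write a timestamped manifest of file hashes and sizes for later reference."""
    manifest = {
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "files": [
            {"path": str(p), "sha256": sha256_file(p), "bytes": p.stat().st_size}
            for p in paths
        ],
    }
    out.write_text(json.dumps(manifest, indent=2))

# Illustrative usage with made-up paths:
# snapshot_evidence(sorted(pathlib.Path("scraped_samples").glob("*.png")),
#                   pathlib.Path("evidence_manifest.json"))
```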
Jurisdictional context worth noting
UK law includes a narrow text-and-data mining exception for non-commercial research. Commercial training falls outside that safe harbor and requires licensing or another legal basis; the UK IPO's guidance on the TDM exception provides context.
In the US, fair use remains the battleground, and agency guidance focuses on disclosures and human authorship for registration. The U.S. Copyright Office's AI policy statements are the baseline reference.
Ongoing litigation to watch
Getty's US case against Stability was refiled in California after an initial Delaware filing. Expect a more direct run at the training issue there.
Elsewhere, Anthropic reportedly reached a $1.5 billion settlement with a group of authors, and Universal Music ended its copyright claims against Udio as part of a broader commercial deal. The industry is testing every legal theory and business structure at once.
Key takeaways
- No UK precedent on whether training requires permission; the flagship question remains open.
- Trademark exposure from watermark-like outputs is immediate and enforceable.
- Copyright claims will turn on evidence of storage, reproduction, or memorization, plus jurisdictional rules.
- Prepare for discovery: logging, dataset provenance, and model behavior records are now core legal assets.
Action checklist
- Run an output audit for marks/watermarks; implement automated suppression.
- Inventory training sources; map licenses and opt-outs to specific model versions (a provenance-manifest sketch follows this list).
- Ship a retention policy for training sets, checkpoints, and user data; enforce access controls.
- Stand up a cross-functional review (legal, security, ML) before each retrain or fine-tune.
- Draft playbooks for takedowns, DMCA notices, and brand complaints with SLA-based responses.
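One illustrative way to tie sources, licenses, and opt-out handling to specific model versions is a simple provenance record. All identifiers below are hypothetical; the structure, not the names, is the point.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DataSource:
    name: str              # internal dataset identifier (hypothetical)
    license_ref: str       # license or contract reference
    opt_out_honored: bool  # whether machine-readable opt-outs were applied
    snapshot_sha256: str   # hash of the exact snapshot used for this train run

@dataclass
class ModelVersionRecord:
    model_id: str
    sources: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical record tying one fine-tune to its inputs.
record = ModelVersionRecord(
    model_id="image-gen-v3-ft-2025-06",
    sources=[
        DataSource(
            name="licensed-stock-2025q1",
            license_ref="vendor-agreement-117",
            opt_out_honored=True,
            snapshot_sha256="<fill in from the evidence manifest>",
        )
    ],
)
print(record.to_json())
```

Keeping these records per model version makes the cross-functional review before each retrain a checklist exercise rather than an archaeology project.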
This ruling buys time, not certainty. Treat it as a signal to tighten controls, not a green light.