Studio Ghibli draws a polite line: CODA pushes OpenAI on Sora 2 training
One battle cools, another heats up. As the UK High Court rules against Getty Images' claim over Stable Diffusion's training, Studio Ghibli and major Japanese rights holders have had enough of their art being scraped and mimicked.
Through the Content Overseas Distribution Association (CODA), members including Studio Ghibli, Bandai Namco, and Square Enix sent a letter to OpenAI. The ask is simple and firm: don't use our content for machine learning without permission, and take infringement claims seriously.
What CODA actually asked for
- Refrain from using members' content for machine learning without prior permission.
- Respond sincerely to member claims and inquiries tied to Sora 2's outputs.
- Ensure AI development happens alongside protection of rightsholders and creators.
CODA also highlights a key legal point: under Japan's copyright system, permission is generally required before use. You can't sidestep liability with a later opt-out.
OpenAI's position (and the sticking point)
OpenAI initially said copyright owners would need to opt out if they didn't want Sora trained on their work. After backlash, it reversed course and now says it blocks generation of copyrighted characters unless rights holders opt in.
The unresolved issue is training that may have already happened. Blocking outputs isn't the same as getting permission up front, which is what CODA is pressing for.
Why this matters to creatives
Style theft isn't just a vibe issue; it's a rights issue. If your visual language or characters are absorbed into a model without consent, the downstream impact hits licensing, brand equity, and client trust.
Japan's government has asked OpenAI not to replicate Japanese artwork. Hayao Miyazaki's stance remains clear; years ago he called AI-generated animation "an insult to life itself." The cultural context is catching up with the tech.
Protect your work: practical moves
- Lock your licenses. Add explicit "no AI training/no dataset creation" clauses to contracts, portfolio terms, and client SOWs.
- Post a clear rights notice. Put usage terms on your site and portfolio pages (including "no scraping," "no ML training"), and make them machine-readable where you can; see the robots.txt sketch after this list.
- Track provenance. Use content credentials like C2PA to attach verifiable metadata to images and video.
- Monitor and document. Save evidence of suspected infringements. Centralize reports and takedowns so you're not reinventing the wheel each time.
- Train on clean data. If you build internal models, use only licensed, commissioned, or in-house assets. Keep a paper trail.
- Avoid living-artist prompts. Build moodboards from licensed references and public-domain sources; describe attributes, not names.
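Rights notices travel furthest when crawlers can read them too. Here's a minimal robots.txt sketch using publicly documented AI crawler tokens (OpenAI's GPTBot, Google's Google-Extended, Common Crawl's CCBot); treat the list as illustrative rather than exhaustive, and remember compliance is voluntary for the bot operator:

```
# robots.txt — ask known AI training crawlers to stay out
# (well-behaved bots honor this; it is a request, not an enforcement mechanism)

User-agent: GPTBot           # OpenAI's web crawler for model training
Disallow: /

User-agent: Google-Extended  # Google's token controlling AI training use
Disallow: /

User-agent: CCBot            # Common Crawl, a frequent source of training data
Disallow: /

# Everyone else (e.g., search indexing) remains allowed
User-agent: *
Allow: /
```

Pair this with the written notice on your pages: the text sets the legal terms, the file signals them to automated scrapers.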
If you're using AI tools right now
- Check model policies. Prefer tools with clear opt-in datasets and published provenance. Keep a log of versions, prompts, and assets used; a minimal logging sketch follows this list.
- Add a rights checkpoint. Before delivery, confirm: licensed inputs, allowed model use, brand/IP clearance, client sign-off.
- Segment your pipeline. Keep AI tests separate from production assets to avoid contaminating licensed libraries.
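Any structured log works; what matters is that it's consistent and timestamped. Here's a minimal sketch in Python, assuming an append-only JSONL file and hypothetical field names (model, version, prompt, assets, output_file), not any particular tool's API:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_generation_log.jsonl")  # hypothetical location; one JSON record per line

def log_generation(model: str, version: str, prompt: str,
                   assets: list[str], output_file: str) -> None:
    """Append one generation record: which model/version, what prompt,
    which licensed input assets, and where the output landed."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "version": version,
        "prompt": prompt,
        "assets": assets,          # paths/IDs of the licensed inputs used
        "output_file": output_file,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage:
log_generation(
    model="example-video-model",   # placeholder, not a real product name
    version="2.1",
    prompt="storyboard pass, licensed moodboard refs only",
    assets=["refs/licensed/board_03.png"],
    output_file="drafts/scene01_test.mp4",
)
```

A flat JSONL file beats a spreadsheet here because records are append-only and easy to grep, diff, and hand over during a rights audit.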
The bigger signal
This isn't a ban on AI. It's a boundary. CODA's letter says: ask permission first, and respect the people who make the work. That's a standard every creative and every toolmaker can operate on.
Level up your AI workflow (without burning your IP)
- Compare tools and policies for video: Generative video tools
- Explore compliant visual tools and references: Generative art tools
Creativity scales with constraints. Set yours in writing, choose your tools with intention, and keep your style yours.