Adobe Hit with Copyright Lawsuit Over AI Training: What Writers Should Do Now
Adobe is facing a proposed class-action lawsuit that claims the company trained its AI tools on copyrighted books without permission. The complaint, filed in California federal court, says Adobe used pirated copies, including works by author Elizabeth Lyon, to train its SlimLM small language models for document-related tasks on mobile devices. It's the first widely reported case aimed directly at Adobe's AI training practices, and it adds to a growing line of lawsuits against tech companies over AI training data. Source: Reuters
Why this matters: if your books were scraped to train an AI, your IP could be fueling outputs that compete with your own work. The legal outcome will influence licensing norms, payouts, and how AI companies source training data moving forward.
What the complaint alleges
- Adobe trained its SlimLM models using pirated copies of books, including Lyon's titles.
- The models were built to respond to user prompts for document-related tasks on mobile devices.
- The lawsuit seeks damages and aims to represent a class of copyright holders whose works were allegedly used without consent.
- The case lands amid multiple author-led lawsuits against other AI companies, including OpenAI and Anthropic, over similar claims.
How this could affect your work
If courts decide that training on pirated works infringes copyright, expect stricter licensing, more opt-out mechanisms, and better data provenance. If not, scraping continues, and your best defenses become contracts, distribution choices, and timely registration.
Practical steps for writers right now
- Register your books and key works with the U.S. Copyright Office. Timely registration (before infringement, or within three months of publication) preserves your right to statutory damages and attorney's fees in a dispute.
- Audit exposure: limit full-text PDFs online, use excerpts on your site, and watermark files shared with partners.
- Add "no AI training" clauses in publishing, platform, and client agreements. Keep a template ready.
- Monitor misuse: set Google Alerts for unique phrases from your books and search AI outputs for suspicious matches.
- Use site-level signals (a robots.txt file and a policies page) to state "no training" restrictions for your content.
- Keep distribution clean: sell through platforms that respect licensing and offer a takedown path.
- Document everything: drafts, timestamps, and contracts. A paper trail helps in negotiations or court.
- Join or track relevant guilds, associations, or class actions to stay informed on filings and opt-ins.
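The site-level signal mentioned above can be as simple as a robots.txt file at your domain root. A minimal sketch, assuming you want to block the major AI training crawlers while leaving regular search indexing alone (crawler user-agent tokens change over time, so verify them against each company's current documentation):

```text
# robots.txt — ask AI training crawlers to skip this site

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Ordinary search crawlers remain allowed
User-agent: *
Allow: /
```

Note that robots.txt is a voluntary signal, not a legally binding restriction, so pair it with an explicit "no AI training" statement on your site's policies page.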
Using AI without risking your rights
Use AI as a drafting assistant, not as a source of unlicensed text. Prefer tools that offer clear data provenance and plan-level legal indemnity, and keep your prompts, sources, and edits documented so your final work is clearly yours.
If you want a vetted way to upskill without compromising your IP, explore curated options that focus on practical, rights-respecting workflows: AI courses by job and AI tools for copywriting.
The bottom line
This case is another signal: data provenance and licensing are becoming non-negotiable. Protect your catalog, tighten your agreements, and keep learning how to use AI as a tool without giving away your rights.