Salesforce sued by bestselling authors over AI training: what writers need to know
Jonathan Franzen, Jodi Picoult, George Saunders, and other authors have filed a federal lawsuit against Salesforce, alleging their books were used without permission to train AI models. The complaint says Salesforce's partnership with Cohere fed copyrighted works into models that power Einstein Copilot and other generative features. The suit seeks damages and an injunction to stop further use of their content.
Salesforce has not commented publicly; the company is expected to point to its licensing and data-sourcing protocols. Cohere is also named in the suit.
Why this matters to working writers
- Consent and compensation: The case challenges whether training AI on full-text books without permission is lawful or requires licenses.
- Precedent: Similar cases against other AI companies are in motion. Outcomes here could ripple into publishing contracts and platform policies.
- Enterprise AI risk: If models trained on books are found infringing, features inside business tools may face limits, audits, or licensing costs.
What could happen next
- Discovery could reveal training datasets, sources, and any licenses or opt-outs that were honored or ignored.
- Courts may clarify how fair use applies to large-scale text training.
- Settlements are possible, including licensing schemes for books used in training.
- Injunctions could force product changes to AI assistants used in sales, service, and productivity apps.
Practical steps for writers now
- Register your works. In the U.S., timely registration (before infringement, or within three months of publication) is what unlocks statutory damages and attorney's fees, and it strengthens your position in court. See the U.S. Copyright Office overview on fair use and rights: copyright.gov/fair-use.
- Lock down contracts. Add clear language that bans training use without written consent, requires disclosure of data sources, and includes indemnity if a client's AI tool infringes.
- Publish a rights notice on your site stating: "No text may be used to train AI systems without explicit permission." It sets expectations and supports enforcement.
- Control access. Limit full-text previews, rate-limit scraping, and keep manuscripts behind authenticated portals where possible.
- Monitor leaks. Set alerts for pirated PDFs/EPUBs and send fast takedowns. The less unauthorized full text floats around, the better.
- Choose AI tools with accountability. Favor vendors that disclose data sources, offer opt-outs, and provide legal indemnity in writing.
- Track clauses from publishers and platforms. Watch for blanket "data usage" permissions that quietly include AI training.
- Document everything. Keep dated copies of your policies, contracts, and takedown notices. Paper trails matter.
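The rights-notice and access-control advice above can be backed by a crawler policy. As a sketch (the user-agent tokens below are real AI-related crawlers, but robots.txt is advisory only: compliant bots honor it, while scrapers may not, so pair it with authentication and rate limiting), a robots.txt blocking known training crawlers might look like:

```
# robots.txt — ask known AI-training crawlers to stay out.
# Advisory only: well-behaved bots honor this; determined scrapers will not.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Everyone else may index normally.
User-agent: *
Allow: /
```

GPTBot (OpenAI), Google-Extended (Google's AI-training opt-out token), CCBot (Common Crawl, a frequent training-data source), and ClaudeBot (Anthropic) are the crawlers most often cited in opt-out guides; the list changes, so revisit it periodically.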
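For the rate-limiting step, a minimal server-side sketch in nginx (domain, paths, and upstream here are hypothetical placeholders) caps how fast any one IP can pull pages, which slows bulk scraping of previews:

```
# nginx: cap anonymous requests at 10 per minute per client IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/m;

server {
    listen 80;
    server_name example.com;                   # hypothetical domain

    location /excerpts/ {                      # hypothetical preview path
        limit_req zone=perip burst=5 nodelay;  # allow a small burst, then reject
        proxy_pass http://backend;             # hypothetical upstream app
    }
}
```

Rejected requests get a 503 by default (tunable via limit_req_status); legitimate readers rarely hit a 10-requests-per-minute ceiling, but automated full-text harvesting does.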
Key questions this case may answer
- Is large-scale training on books without permission fair use or infringement?
- Who is responsible: the platform embedding the model, the model provider, or both?
- Will courts force transparency about datasets and licensing?
- Could enterprise AI features face limits unless they use licensed corpora?
"This is about protecting the value of human creativity," a plaintiffs' spokesperson said. Expect more creators to press for consent, compensation, or both.
Build your AI skill set without giving up your rights
If you're integrating AI into your writing workflow, vet tools and learn practical, rights-respecting methods. Curated options for writers are here: AI tools for copywriting and AI courses by job.