Full List of Books Behind Anthropic AI Revealed as $1.5bn Settlement Looms

A public list of books used to train Anthropic's AI puts authors on alert. Check if your work appears, tighten contracts, and treat training as a licensable right.

Categorized in: AI News, Writers
Published on: Oct 03, 2025

Anthropic training list drops: what it means for writers and what to do next

A database listing books reportedly used to train Anthropic's AI models is now public, surfacing alongside a court case expected to result in a $1.5bn settlement.

Whether you publish through a house or independently, this matters. It's about licensing, control over your work, and setting the terms for how your writing is used in AI systems.

Why this matters

  • Transparency: You can now check if your titles appear in training data.
  • Rights: Training is a distinct use. It can be licensed or restricted in contracts.
  • Money: A large settlement signals that compensation frameworks are coming.
  • Precedent: Expect clearer rules around data usage and opt-outs in the near future.

How to check if your work is listed

  • Search by author name, pen name, and series name.
  • Search by title, subtitle, and ISBN (both ISBN-10 and ISBN-13).
  • Check foreign editions, translations, and audiobook editions.
  • Look for short works that appeared in anthologies or collections.
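If you end up with a downloadable snapshot of the database, the searches above are easy to script rather than run one at a time. The sketch below is a minimal example, assuming a CSV export with `author`, `title`, and `isbn13` columns (a hypothetical shape; real exports will differ), and matching case-insensitively so pen names and subtitle variants are caught:

```python
import csv
import io

# Hypothetical snapshot of the book list; real exports vary, but a CSV
# with author/title/ISBN columns is a common shape for such data.
SAMPLE = """author,title,isbn13
Jane Doe,The Quiet Orchard,9781234567897
J. Doe,Quiet Orchard: A Novel,9780987654321
"""

def find_listings(rows, *, names=(), titles=(), isbns=()):
    """Return rows matching any author name, title fragment, or ISBN.

    Matching on names and titles is case-insensitive and substring-based,
    so pen names ("J. Doe") and subtitle variants are caught too.
    """
    names = [n.lower() for n in names]
    titles = [t.lower() for t in titles]
    hits = []
    for row in rows:
        author = row["author"].lower()
        title = row["title"].lower()
        if (any(n in author for n in names)
                or any(t in title for t in titles)
                or row["isbn13"] in isbns):
            hits.append(row)
    return hits

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
matches = find_listings(rows, names=["doe"])
for m in matches:
    print(m["title"])
```

Run the same function several times with different name spellings, series titles, and ISBNs, and keep the output with your documentation.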

If your book appears in the database

  • Document the listing: screenshots, URLs, timestamps.
  • Pull your contracts: note any clauses on text/data mining, "machine learning," "AI training," or "derivative models."
  • Contact your agent or publisher's rights team and request their position in writing.
  • Join or consult a professional body for coordinated updates and guidance; the Authors Guild, for example, maintains AI resources for writers.
  • Log any unauthorized distributions of your files (pirated PDFs, mass mirrors) that may have fed datasets.

Smart contract updates for your next deal

  • Define "AI training" and "data mining" as separate, licensable uses.
  • Reserve training rights to the author unless explicitly licensed for a fee.
  • Add audit rights for dataset use, disclosure, and takedown on breach.
  • Require explicit consent for model fine-tuning on your work.
  • Include reversion or compensation triggers if your work enters a model without approval.

Protect your work online

  • Control file exposure: limit full-text PDFs and large excerpts on public pages.
  • If you host samples, use robots.txt and meta tags that signal "noai" where appropriate. They're not a guarantee, but they set clear intent.
  • Watermark review copies and track distribution.
  • Monitor major piracy sites and file-sharing forums on a monthly schedule.
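The crawler signals mentioned above might look like the fragment below. Note that "noai" directives are an informal convention, not an enforced standard: well-behaved crawlers may honor them, but nothing compels compliance. The user-agent names shown (GPTBot, CCBot) are real AI-related crawlers; adjust the paths to match your own site.

```text
# robots.txt — ask known AI crawlers to skip sample-chapter pages
User-agent: GPTBot
Disallow: /samples/

User-agent: CCBot
Disallow: /samples/
```

```html
<!-- per-page signal in the <head>; a convention, not a guarantee -->
<meta name="robots" content="noai, noimageai">
```

Even without enforcement, these signals document your intent, which can matter later in rights discussions.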

Using AI on your terms

AI can help with outlines, research support, and editing passes without handing over your rights. Keep your drafts local when possible, remove sensitive material before prompting, and keep a changelog so your voice stays intact.

If you want structured, practical training on writing with AI (without losing your style), explore our curated resources for writers.

What to watch next

  • Settlement terms: they may shape future licensing rates and disclosure standards.
  • Collective licensing: expect proposals that let rights holders opt in for payment.
  • Regulatory guidance: keep an eye on the U.S. Copyright Office's ongoing AI studies and reports.
  • Publisher policies: ask how your publisher handles AI training permissions and reporting.

Bottom line

Treat AI training as a right you can license, limit, or refuse. Audit the database, tighten your contracts, and control what you put online. The writers who act now will set the terms for everyone else.

This is not legal advice. Speak with an attorney for guidance on your specific situation.