'Don't Steal This Book': Thousands of Authors Protest AI At London Book Fair
An empty book is the loudest object at this year's London Book Fair. "Don't Steal This Book" lists thousands of authors and little else - a direct protest against AI firms using writers' work without permission or payment.
The back cover doesn't mince words: "The UK government must not legalise book theft to benefit AI companies." That message is landing right as policymakers weigh changes that could let AI companies train on copyrighted books unless rights holders opt out.
Among the signatories: Kazuo Ishiguro, Richard Osman, Alan Moore, Marian Keyes, Malorie Blackman, Philippa Gregory and Mick Herron. See the full list at dontstealthisbook.com.
The campaign states: "AI companies are building their products by copying millions of books without permission or payment." And the warning is blunt: "If they don't [pay], this is what we'll be left with: empty pages, writers without pay, and readers deprived of the next book they'll love."
Organizer Ed Newton-Rex calls the AI industry "built on stolen work … taken without permission or payment." He adds: "Generative AI competes with the people whose work it is trained on, robbing them of their livelihoods."
The timing matters. A week from now, UK ministers are due to publish an economic impact assessment and a progress update on proposed copyright changes. The most controversial idea: allowing training on copyrighted work unless a rights holder explicitly opts out. The London Book Fair runs 10-12 March, putting the debate center stage.
Why this matters for working writers
Training on books without consent undercuts licensing and erodes future income. An opt-out system shifts the admin and legal burden onto individual authors and small presses - the people with the least time and leverage.
If "free" training becomes the norm, publishers face tighter margins, advances shrink, and the long tail - midlist and debut authors - takes the hit first. This isn't abstract. It affects your next deal, your backlist value, and whether your name funds someone else's model.
What you can do now
- Lock down your website. Block AI training crawlers in robots.txt and your server config. It's not a silver bullet, but it's a clear signal of refusal.
Sample robots.txt entries:
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
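If you want to sanity-check that your rules actually deny a given crawler before deploying them, Python's standard-library robots.txt parser can evaluate them locally. This is a quick sketch, not part of the campaign's guidance; the bot names mirror the sample entries above, and the URL is a placeholder.

```python
from urllib.robotparser import RobotFileParser

# The sample robots.txt rules from above, as one string.
RULES = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

# Each listed training crawler should be blocked site-wide;
# agents with no matching entry fall through to "allowed".
for bot in ["GPTBot", "CCBot", "ClaudeBot", "SomeOtherBot"]:
    allowed = parser.can_fetch(bot, "https://example.com/my-novel-excerpt/")
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

Note the limitation this makes visible: robots.txt is a default-allow system, so any crawler you haven't named slips through, and compliance is voluntary in any case. That's why the campaign's advice pairs it with site terms and contract language.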
- Update your site terms. State that scraping or using your content for model training is prohibited without written permission.
- Tighten your contracts. Add clauses that ban training use, define "derivative AI outputs," and require compensation for any dataset inclusion.
- Join and support advocacy groups. In the UK, the Society of Authors and ALCS are pushing for fair terms; in the US, the Authors Guild has model contract language and legal actions.
- Coordinate your stance with your publisher and agent. Make sure opt-outs are filed wherever possible, and that your works aren't being licensed into training sets via third parties.
- Stay informed on policy. Track the UK government's AI-and-copyright process so you can respond to consultations and support industry statements.
- Use AI on your terms. Learn tools that help your workflow without handing over IP - prompt safely, keep drafts local, and avoid uploading manuscripts to public bots. For practical guidance, see AI for Writers.
Key dates and links
- London Book Fair: 10-12 March.
- UK government economic impact assessment deadline: 18 March.
- Authors and campaign information: dontstealthisbook.com
The message from the fair is simple: pay for books if you want to build on them. Consent first. Compensation always. Empty pages are not a business model.