Publishers face detection crisis as AI-written books slip past quality checks
Hachette's Orbit imprint halted US publication of Shy Girl, a horror novel by Mia Ballard, after discovering the manuscript could be up to 78% AI-generated. The UK edition, published by Wildfire in November 2025, has been discontinued. Ballard denies writing the book with AI, saying an editor she hired used the technology on a self-published version.
The discovery has forced publishers and literary agents to confront a hard truth: detection tools don't work reliably, and determined authors can evade them.
Detection tools are failing
Literary agent Kate Nash said she initially missed AI-generated query letters because they appeared more polished than usual. She only recognized the pattern when a submission included a leftover AI prompt - "Rewrite my query letter for Kate Nash including a comp to a writer she represents."
An editor at one of the "big five" publishing houses described feeling a "cold shiver" when the Shy Girl story broke. "We make it very clear to authors what we expect, we get them to sign contracts and we run their work through multiple AI detection tools," the editor said. "But we know all this is fallible."
Prof Patrick Juola, a computer scientist who specializes in authorship attribution, was blunt: "I don't want to call AI detection tools a scam, but it's a technology that simply doesn't work."
He compared the problem to antibiotic resistance. AI manufacturers continually upgrade their systems, so any detection technology that gains traction will simply be circumvented by better AI tools.
The cat-and-mouse game accelerates
Mor Naaman, head of the social technologies research group at Cornell Tech, said the situation will worsen. "AI learns very quickly how to avoid AI detection. We're not quite there yet, but soon publishers won't stand a chance," he said.
Nikhil Garg, an assistant professor at Cornell Tech's Jacobs Institute, raised a complicating factor: sophisticated authors can edit AI-generated text, test it against detection tools, and revise again until it passes. "At some point, you have to ask: has it become their own work anyway, despite the AI?" he said.
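The loop Garg describes - edit, test against a detector, revise, repeat - can be sketched in a few lines. Everything below is hypothetical: `hypothetical_detector` is a crude stand-in for a commercial detection tool (real ones are opaque), and `revise` stands in for a human editing pass; the names, scoring heuristic, and threshold are illustrative assumptions, not any real product's behavior.

```python
import random


def hypothetical_detector(text: str) -> float:
    """Stand-in for an AI-detection tool: returns a score in [0, 1],
    higher meaning "more AI-like". This stub penalizes very uniform
    sentence lengths, one signal such tools are often said to use."""
    sentences = [s for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return 1.0
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    # Low variance (uniform sentences) -> high "AI-likeness" score.
    return 1.0 / (1.0 + variance)


def revise(text: str, rng: random.Random) -> str:
    """Stand-in for a human editing pass: randomly merge two adjacent
    sentences or split one, varying sentence lengths."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) > 1 and rng.random() < 0.5:
        i = rng.randrange(len(sentences) - 1)
        sentences[i:i + 2] = [sentences[i] + " " + sentences[i + 1]]
    else:
        i = rng.randrange(len(sentences))
        words = sentences[i].split()
        if len(words) > 4:
            cut = rng.randrange(2, len(words) - 1)
            sentences[i:i + 1] = [" ".join(words[:cut]),
                                  " ".join(words[cut:])]
    return ". ".join(sentences) + "."


def launder(text: str, threshold: float = 0.2,
            max_rounds: int = 50) -> tuple[str, int]:
    """Edit-test-revise until the detector score drops below the
    threshold, or until max_rounds edits have been tried."""
    rng = random.Random(0)
    for round_no in range(max_rounds):
        if hypothetical_detector(text) < threshold:
            return text, round_no
        text = revise(text, rng)
    return text, max_rounds
```

The point of the sketch is Garg's observation: because the detector is just another function the author can call, the author can iterate against it until it passes - and with enough rounds of genuine human editing, the line between "AI-generated" and "their own work" blurs.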
The grey zones expand
Naaman acknowledged that while Shy Girl appeared to be an obvious case of AI generation, the boundaries are increasingly blurred. "We all work in an AI-hybrid world now. When does something become an AI-generated book, rather than just using AI like I use a spellchecker, to fix my grammar or maybe spark ideas?" he asked.
What matters more than technical definitions, Naaman argued, is the cultural case for caring. AI tends to produce bland, homogeneous work. It cannot generate the diverse creativity of human minds or reflect the messy, difficult experience of being human - the very things literature exists to capture.
"AI subtly inserts specific viewpoints into its work that are driven by algorithms of all-too-powerful corporations," he said. If AI absorbs entry-level writing work, emerging authors lose the chance to develop their craft before creating significant work.
Trust becomes the only tool
Anna Ganley, chief executive of the Society of Authors, launched the Human Authored scheme earlier this month to identify works written by humans. The system relies on a single mechanism: trust.
Nash emphasized the stakes. "Readers trust writers. Writers need to continue to trust themselves over machines," she said. "The bond between reader and writer is likewise based on trust; the engagement can operate on many levels, but most of all, it must be meaningful."
For writers, the practical implication is clear: technical safeguards have limits. The only protection against suspicion is honesty about your process.