Authors Guild warns editors against uploading manuscripts to AI without permission
The Authors Guild in the US has told editors to stop uploading submitted manuscripts to consumer-facing AI systems without author consent.
The warning marks a direct challenge to a common editorial practice. Editors have increasingly used AI tools to help with manuscript assessment, editing notes, and workflow management. The guild argues this practice violates author rights and exposes unpublished work to systems that may train on the data.
For writers submitting work to publishers and literary magazines, the issue is straightforward: your manuscript may be processed through AI without your knowledge or agreement. Once uploaded, the content could be used to train AI models or stored on third-party servers outside your control.
The guild's position reflects broader concerns about generative AI and large language models. These systems are trained on large bodies of text, and unpublished manuscripts represent valuable, original content that authors have not consented to supply.
Publishers and editors should obtain explicit permission before uploading any manuscript to an AI tool, the guild said. This applies whether the system is cloud-based, proprietary, or freely available online.
The guidance comes as the publishing industry grapples with AI adoption across editorial, marketing, and production functions. Some publishers have implemented AI policies. Others have not addressed the issue publicly.
For writers and authors, the takeaway is practical: ask your publisher or editor directly whether your work will be processed through AI systems. Request written confirmation of their data handling practices. If consent is required, provide it in writing so both parties have documentation.
The Authors Guild did not announce enforcement mechanisms or penalties for editors who ignore the warning.