Authors Guild Updates AI Guidance for Writers Facing Legal Risks
The Authors Guild released updated best practices for writers using AI tools, adding two new sections that spell out specific legal and professional dangers. The revisions come two years after the organization first published AI guidance in February 2024.
The updated framework breaks AI use into four categories: guiding principles, risks to monitor, types of use, and recommended practices. The Guild emphasizes these are guidelines, not rules.
What Writers Need to Know About Copyright and Contracts
Two legal issues stand out. First, AI-generated text cannot be copyrighted. Second, failing to disclose AI-generated content in a copyright registration application constitutes fraud against the Copyright Office.
Most book contracts require authors to warrant that manuscripts are original work. Using undisclosed AI-generated text may breach that warranty and expose writers to legal liability.
Mary Rasenberger, CEO of the Authors Guild, said the updated guidance aims to "provide context and clarity around the legal and ethical questions surrounding the various uses of AI in the writing process."
The Training Data Problem
Every major commercial large language model was trained on books and other writing without author permission, compensation, or control over how the content is used. The Guild has backed legal action against AI companies over copyright infringement claims.
The organization stresses that understanding risk levels matters: not all AI use in writing raises the same level of concern, and the framework is meant to help writers assess the risks of their specific use.