Authors Battle Big Tech Over AI’s Use of Their Work

David Baldacci and other authors sue AI firms for using their books without permission to train models like ChatGPT. Courts and Congress debate copyright and fair use in AI development.

Published on: Jul 20, 2025

Authors Challenge AI Companies Over Copyright Issues

David Baldacci, well-known for his legal thrillers, recently shared a striking example of AI’s impact on writing. His son asked ChatGPT to create a story in Baldacci’s style—and within seconds, the AI produced a plot filled with characters and twists that felt lifted from Baldacci’s own work. Baldacci described it as if “someone had backed up a truck to my imagination and stolen everything I’d ever created.”

Baldacci is part of a growing group of authors suing AI companies like OpenAI and Microsoft. Their claim: these companies used their books to train AI models such as ChatGPT and Copilot without permission or payment. This is just one of over 40 similar lawsuits making their way through U.S. courts. Authors are now appealing to Congress for support in protecting their work and the integrity of literature.

Legal Battles Gain Momentum

At a recent Senate subcommittee hearing, lawmakers expressed concern over AI companies' practices. It was the first hearing to focus on how authors are affected by AI training. The day after the hearing, a federal judge granted class-action status to a lawsuit against Anthropic, another AI firm accused of using pirated books. Ralph Eubanks, president of the Authors Guild, emphasized the moral weight of the issue, saying it sometimes keeps him awake at night.

These lawsuits have uncovered that some AI companies obtained millions of digitized books through questionable “torrent” sites, bypassing payment to authors and publishers. Creators across fields—artists, musicians, photographers, and journalists—are also demanding legal protections against unauthorized use of their work for AI training.

Industry’s Defense: Fair Use and Innovation

AI companies argue their use of copyrighted material falls under “fair use,” a legal principle that allows limited use of copyrighted works without permission. They claim this is essential to building AI that can perform complex tasks better than humans. Some warn that restricting access to copyrighted content could weaken the U.S. in the global AI race, particularly against China.

At the Senate hearing, Sen. Josh Hawley called this situation “the largest intellectual property theft in American history.” Companies like Meta and Anthropic admit to downloading pirated books but maintain their right to use this content internally to develop advanced language models like Meta’s Llama and Anthropic’s Claude.

Key Court Decisions and Their Implications

Recent court rulings have mostly sided with AI firms on the use of copyrighted material for training, considering it fair use. This is a setback for authors hoping to secure payment for their work’s use. However, courts have allowed parts of lawsuits to proceed, particularly when companies’ methods of obtaining the books may have violated copyright law.

For example, U.S. District Judge William Alsup granted class-action status to the Anthropic case. This means all authors whose books were included without permission could claim damages if the company is found liable. Anthropic insists it uses copyrighted works to create something new, not to replicate them.

Meanwhile, in the Meta case, the judge dismissed most claims, stating the authors failed to prove harm. Meta stressed that fair use is crucial for developing transformative AI technologies. Yet, the ruling also outlined how authors might prove future harm, such as AI-generated content reducing sales of original works. This argument has yet to be tested in court but suggests ongoing legal battles ahead.

Congressional Response and Future Prospects

Sen. Hawley expressed frustration that current laws offer little protection for authors against unauthorized use by large corporations, and he called for legislative changes to close the gap. Sen. Peter Welch introduced the TRAIN Act, a bill that would let creators find out whether their work was used in AI training datasets. That transparency matters because AI training often relies on massive, opaque datasets.

At the hearing, law professor Edward Lee offered a different view, supporting the courts’ recognition of AI training as transformative fair use. He warned against rushing new laws before courts fully decide. Sen. Dick Durbin highlighted the challenge of balancing innovation with protecting creators, asking how artists can compete when AI can generate similar content instantly.

Concerns Beyond Copyright

Authors worry that AI tools not only threaten their income but also the craft of writing itself. Ralph Eubanks mentioned how students increasingly use AI like ChatGPT for essays, which may hinder their ability to develop original ideas. While lawmakers showed some support, AI and copyright issues are unlikely to be a top priority in Congress soon.

Writers facing these challenges may want to stay informed about AI’s evolving role in content creation. For those interested in learning how AI tools work and how to engage with them responsibly, resources like Complete AI Training offer courses tailored to various skills and jobs.

Understanding these developments is essential for writers who want to protect their work and adapt to the changing landscape of creative content.

