B.C. Author Leads Class-Action Lawsuit Alleging AI Firm Used Copyrighted Books Without Permission

A B.C. author is suing Anthropic PBC, alleging the company used copyrighted books without permission to train its AI model. The proposed class action seeks damages and legal costs.

Categorized in: AI News, Writers
Published on: Jun 01, 2025

B.C. Author Initiates Class-Action Lawsuit Against AI Company Over Copyright Use

A British Columbia author is taking legal action against Anthropic PBC, a San Francisco-based artificial intelligence company, alleging unauthorized use of his and other authors’ copyrighted works to train its AI language model. The lawsuit, filed on May 23 in the B.C. Supreme Court, is a proposed class action that seeks to represent Canadian authors.

Allegations Against Anthropic PBC

James Bernard MacKinnon claims Anthropic developed a large language model (LLM) by training it on extensive text data, including copyrighted books, without obtaining the necessary licenses or permissions. Such models learn the nuances of language from their training data, enabling them to perform tasks like creative writing, translation, and answering questions.

The lawsuit argues that Anthropic knowingly used a dataset called Books3, downloaded from the internet, which contained unlicensed copyrighted material. According to the claim, Anthropic neither sought permission from nor offered to pay the rights holders before using these works to enhance its AI technology.

Specifics of the Claim

  • MacKinnon states that two books he authored or co-authored were included in the Books3 dataset without his consent.
  • The claim accuses Anthropic of deliberately concealing the use of copyrighted material so that the AI model itself would not identify the source content as protected.
  • Anthropic is said to have benefitted financially from this approach, positioning itself ahead of competitors by leveraging unauthorized content.
  • The company, founded in January 2021 by former OpenAI employees, has received investments from Amazon and Google affiliates.
  • Anthropic released its LLM product called Claude in March 2023, allegedly trained using the unlicensed dataset.

Legal Demands and Next Steps

MacKinnon’s lawsuit seeks damages, disgorgement of profits earned through unauthorized use of copyrighted works, punitive damages, and coverage of legal costs. The claim describes Anthropic’s conduct as “high-handed, arrogant, and displaying reckless disregard” for the rights of the authors involved.

As of now, Anthropic has not responded to requests for comment. The allegations remain unproven in court.

What Writers Should Know

This case highlights growing concerns among writers and creators about how AI companies source training data. Many AI models rely on vast datasets sourced from the internet, which may contain copyrighted material used without authorization. This lawsuit could set important precedents for how AI developers handle intellectual property rights moving forward.

Writers interested in understanding AI’s impact on their work can explore resources on AI training and copyright issues. For practical courses and tools related to AI and writing, visit Complete AI Training's copywriting tools and latest AI courses.