Apple Faces Fresh Copyright Lawsuit Over AI Training on Pirated Books
Apple faces a lawsuit accusing it of using authors’ books without permission to train its AI models. This adds to growing legal pressure on tech firms over copyrighted AI training data.

Apple Faces New Copyright Lawsuit Over AI Training Data
Apple is under fire again as two authors have filed a lawsuit accusing the company of using their books without permission to train its artificial intelligence models. The lawsuit, filed in Northern California federal court, alleges that Apple incorporated unauthorized copies of works by Grady Hendrix and Jennifer Roberson into its OpenELM large language models.
The complaint highlights that Apple neither credited nor compensated the authors for their work, which the plaintiffs claim was sourced from a dataset of pirated books widely known in machine learning research circles.
Growing Legal Pressure on AI Companies
This lawsuit adds Apple to a growing list of tech companies facing legal challenges over AI training data. On the same day, AI startup Anthropic announced a $1.5 billion settlement with a group of authors who claimed their works were used without proper permission to train the Claude chatbot. Although Anthropic did not admit wrongdoing, this deal is considered the largest copyright recovery in history.
Other tech giants face similar accusations. Microsoft was sued in June by a group of writers alleging the unauthorized use of their works to train its Megatron model, and Meta Platforms and Microsoft-backed OpenAI are defending comparable claims.
What This Means for Apple
The lawsuit lands as Apple works to expand its AI capabilities with the OpenELM models, which aim to offer smaller, more efficient alternatives to systems from OpenAI and Google. These models are intended to be integrated across Apple's devices and software.
The plaintiffs argue that building these models on pirated content undermines the legitimacy of Apple's AI push and exposes the company to claims of unjust enrichment. Analysts note that Apple's image as a privacy-focused, user-first company could suffer significantly if courts find its AI models were trained on stolen material, and that the reputational damage may outweigh any financial penalties.
Legal Debate Over Copyright and AI Training
The lawsuits highlight ongoing uncertainty about how copyright law applies to AI training. Proponents of a "fair use" defense argue that training a model on text is akin to a person reading it: the model learns patterns it uses to generate new content rather than reproducing the original works. Opponents counter that ingesting copyrighted works without licenses denies creators rightful compensation.
Anthropic's settlement may shape how similar cases unfold. By agreeing to a large payout without admitting liability, Anthropic signaled how costly it can be to fight these suits in court. Apple could face similar financial consequences if the case moves forward.
Writers and content creators should closely watch these developments, as they could reshape how AI companies use copyrighted material and how creators are compensated for their work.