Dutch Authors and Journalists to Meta: Stop Using Our Work to Train AI

Dutch writers and journalist groups demand Meta stop training Llama on copyrighted Dutch works, citing illegal datasets. Legal action looms; authors urged to document and opt out.

Published on: Feb 28, 2026

Dutch writers' groups demand Meta stop using copyrighted texts to train Llama

Dutch journalists' and writers' unions have formally demanded that Meta stop using copyrighted works by Dutch authors, reporters, and translators to train its Llama AI models. They say Meta relied on texts taken from illegal datasets, without permission or payment. Here's what it means, and what to do now.

The Dutch Association of Journalists (NVJ), the Authors' Union (Auteursbond), and writers' rights organization Lira sent a demand letter urging Meta to immediately cease use of the disputed material and halt distribution of models trained on it. If Meta does not respond, a summons is expected, according to NVJ chair Thomas Bruning.

The dispute stems from US court filings alleging Meta downloaded tens of terabytes of text from an illegal online database containing books, articles, and other copyrighted works, including those by Dutch authors. The unions argue that copying and distributing such texts without consent violates copyright.

Why this matters for writers

This fight goes beyond one company. It targets the core question: can AI firms train on your work without consent or compensation? If this case advances, it could shape how training data is sourced, documented, and licensed across Europe.

EU law (the Copyright Directive, 2019/790) allows text-and-data mining under conditions, and rights holders can reserve their rights against commercial mining, provided the reservation is expressed appropriately, for example in machine-readable form for online content. If you've clearly opted out and your works were still used for training, your case gets stronger. For context, see the EU text and data mining rules.

What you can do right now

  • Coordinate with your local union or collecting society (NVJ, Auteursbond, Lira) to join collective action and get updates.
  • Make a clean inventory of your works: dated drafts, publication links, ISBN/ISSN, contracts, and assignment letters. This evidence matters if damages are calculated.
  • Check whether your books or articles appear in common training datasets. A practical starting point is the Have I Been Trained search by Spawning: haveibeentrained.com.
  • Reserve your rights for text-and-data mining where possible (site policies, robots directives, license terms) and keep a record of those reservations.
  • Review your publisher and platform terms. Opt out of AI training if an option exists, and add language that prohibits model training without explicit permission and payment.
  • Limit full-text samples on public portfolios; use excerpts or PDF previews with clear copyright notices.
  • Stay sharp on tools and workflows that help you protect, track, and still benefit from AI ethically. See AI for Writers.
  • Exploring legal paths or compliance steps? Useful primers live here: AI for Legal.
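For the rights-reservation step above, one common machine-readable approach is a robots.txt file that disallows known AI training crawlers. The sketch below is illustrative only: the user-agent tokens shown (GPTBot, Google-Extended, CCBot, meta-externalagent) are names these crawlers have publicly used, but verify current tokens in each vendor's documentation, and note that robots.txt signals your reservation to well-behaved crawlers rather than enforcing it.

```text
# robots.txt — sketch of an AI-training opt-out.
# Confirm current user-agent tokens in each vendor's docs before relying on this.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

# Regular search and other crawlers remain allowed.
User-agent: *
Allow: /
```

Pairing a robots.txt block with an explicit reservation in your site's terms of use (and keeping dated copies of both) strengthens the paper trail the unions recommend.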

What to watch next

  • Whether Meta responds to the demand letter or faces a summons in the Netherlands.
  • If Dutch or EU authorities push for broader transparency and licensing around training data.
  • How other ongoing publisher and author cases influence this dispute and potential settlement frameworks.
  • Whether AI companies increase dataset disclosure and offer standardized opt-outs or licensing deals.

Bottom line: keep your records tight, make your rights explicit, and coordinate. Individual effort helps, but collective pressure moves the needle.

