Dutch writers and journalists to Meta: Stop training AI on our work - and pay for it

Three Dutch groups demand Meta stop training AI on writers' and journalists' work without consent or pay. They want talks on a lawful, collective license via Lira.

Published on: Feb 28, 2026

Dutch writers and journalists demand Meta stop using their texts for AI training without consent or pay

On 27 February 2026, three major Dutch organizations sent a formal demand to Meta: stop using copyrighted work from writers, translators, and journalists to train AI models without permission or compensation. The Dutch Writers' Guild (Auteursbond), the Dutch Association of Journalists (NVJ), and the Lira Foundation want Meta to immediately cease this practice and enter talks for a lawful, collective licensing arrangement.

What's happening

The organizations state that Meta trained AI models, including Llama, on large amounts of copyrighted material sourced from illegal datasets: books, articles, and other texts copied and distributed without consent or payment. They argue this violates copyright and undermines the economic position of creators.

Liesbet van Zoonen, president of the Auteursbond: "We are not opposed to Large Language Models, but the AI industry is a multi-billion dollar business, which is now illegally using and taking over the work of writers, translators and journalists. That has to stop. It stands to reason that they need to start paying. Authors are not a free resource for AI."

Thomas Bruning, general secretary of the NVJ, adds: "Without our work, there is no AI. Fair compensation is essential to ensure that journalists and writers can continue their work. Their work is indispensable for the development and innovation of Large Language Models. Without that foundation, these models will lose their relevance and quality."

The demand

The letter calls on Meta to stop using illegal data sources and to stop offering AI models trained on them in Europe. It also invites Meta to discuss a collective, lawful licensing framework, managed by Lira, that provides clear terms and fair remuneration.

Why this matters to you as a writer

LLMs feed on text. Yours. If models are trained on your books, articles, and posts without consent, your work funds a product you don't get paid for - and may compete against you. That erodes income and control over how your words are used.

A collective approach matters because individual consent or one-off deals aren't realistic at scale. A licensing framework can make payments traceable, conditions clear, and enforcement feasible.

What you can do now

  • Connect with your representative bodies. If you're in the Netherlands, that's Auteursbond, NVJ, and Lira. Elsewhere, contact your local writers' guild or collecting society and ask about AI training rights and opt-out options.
  • Reserve your rights for text and data mining (TDM) where applicable. EU law allows a TDM opt-out; use machine-readable rights statements and site policies to signal restrictions. See the EU directive for context: Directive (EU) 2019/790.
  • Audit your catalog. Keep a clean list of your works, editions, and where they're hosted. Make your licensing terms explicit on your website and in contracts.
  • Document suspected misuse. If you find model outputs closely echoing your work, collect evidence and consult your guild or CMO before taking action.
  • Stay commercially ready. Explore tools and strategies to work with AI on your terms, not against you: AI for Writers.
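The TDM reservation mentioned above can be signaled in machine-readable form. One common approach is a robots.txt file that disallows known AI-training crawlers. This is a minimal sketch: the user-agent names below (GPTBot, Google-Extended, meta-externalagent) are examples of crawler identifiers that providers have published, but Directive (EU) 2019/790 does not mandate a single format, so verify the current strings in each provider's documentation before relying on them.

```
# robots.txt - example TDM opt-out signals.
# Crawler names are illustrative; check each provider's docs
# for the user-agent strings they currently honor.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training opt-out token
User-agent: Google-Extended
Disallow: /

# Meta's crawler for AI training data
User-agent: meta-externalagent
Disallow: /
```

A robots.txt entry only covers crawling of your own site; restating the reservation in your site's terms and in contracts, and (where supported) via the W3C TDM Reservation Protocol community draft, strengthens the signal.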

About the organizations and the approach

Lira is a collective management organization safeguarding the rights of thousands of Dutch authors. The Auteursbond and NVJ represent a large share of professional writers and journalists, including many freelancers. They've taken a collective route because individual consent and payments aren't practical at the scale of AI training.

A collective license via Lira would enable transparent fees, enforceable conditions, and a path for lawful AI training that respects creators.

What to watch next

  • Meta's response to the demand and whether talks begin on a lawful licensing framework.
  • Potential restrictions on offering AI models trained on unlicensed data in Europe.
  • Further action by other collecting societies and publishers across the EU.

Context on Llama

Meta develops the Llama family of large language models. Understanding how these models are built and licensed can help you assess risk and opportunity: Meta's Llama overview.

