Korean newspapers push back: scrap AI training exemption, require transparency and fair pay

South Korea's newspaper industry is urging a rethink of a plan that would let AI train on copyrighted works first and pay later. Publishers seek consent, transparency, and fair compensation.


Newspaper Industry Calls for Full Rethink of Korea's AI Training Copyright Exemption

Published: Jan 6, 2026, 00:52
Updated: Jan 6, 2026, 01:21

South Korea's newspaper sector is urging a full review of the government's proposed AI copyright plan. The draft would let AI companies train on copyrighted works without prior permission and compensate afterward. The Korean Association of Newspapers says this "use first, compensate later" approach infringes core rights and creates enforcement gaps.

What's on the table

The National AI Strategy Committee's AI Action Plan recommends revising laws so AI models can learn from copyrighted content without legal uncertainty. The government is weighing a legal change that would allow training first and payment later, without prior consent from rights holders. News publishers are pushing back, warning the policy would shift economic value away from creators.

Why this matters for government

This decision sets precedents for data access, creator compensation, and platform accountability. It also affects market structure: who controls training data and under what terms. Clear rules will determine whether innovation and rights protection can move together or collide.

Industry's core objections

  • It removes the right to consent and forces unilateral trade-offs on creators.
  • Verification is weak: it is hard to confirm which works were used, how much, and in which models.
  • For news, model training may substitute for original content consumption, eroding the market for the source.
  • The approach risks advantaging large platforms and entrenching data monopolies.

What publishers are asking for

  • Withdraw the AI copyright exemption clause.
  • Legislate transparency on training data sources and usage.
  • Build a fair compensation and enforcement system with verifiable reporting.

Global context to consider

International practice is mixed. The EU's copyright directive allows text-and-data mining, but rights holders can opt out of commercial uses, a key guardrail.

Publishers argue that no country has enacted a blanket statutory exemption for AI training itself. Policymakers may wish to assess how the Korean proposal aligns with these regulatory models and the opt-out mechanisms they provide.

Policy options for a balanced approach

  • Transparency: mandatory registries of training datasets, sources, and model versions.
  • Consent pathways: opt-in by default for news, or at minimum a clear, enforceable opt-out (a verification sketch follows this list).
  • Audits: independent audits to verify training data claims and usage logs.
  • Licensing: collective licensing frameworks to reduce transaction costs while preserving rights.
  • SME protections: fee caps, dispute support, and simplified claims for smaller outlets.
  • Attribution signals: standardized, machine-readable metadata that crawlers and model developers must honor by default.
  • Dispute resolution: fast-track mechanisms with statutory penalties for non-compliance.
  • Pilots: time-limited pilots with reporting before broad legal changes.
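
For illustration, here is a minimal Python sketch of how an "enforceable opt-out" could be checked in practice, using only the standard library's robots.txt parser. The crawler tokens (GPTBot, CCBot, Google-Extended) are real, publicly documented AI user agents; the target site is a placeholder, and this is one possible compliance check, not a mechanism prescribed by the plan.

```python
# Sketch: check whether a publisher's robots.txt opts out of known AI crawlers.
# Crawler tokens are publicly documented; the site URL below is a placeholder.
from urllib.robotparser import RobotFileParser

AI_CRAWLER_TOKENS = ["GPTBot", "CCBot", "Google-Extended"]

def opt_out_status(site: str, path: str = "/") -> dict[str, bool]:
    """Return, per crawler token, whether robots.txt permits fetching `path`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetches robots.txt; may raise if the host is unreachable
    return {
        token: parser.can_fetch(token, f"{site.rstrip('/')}{path}")
        for token in AI_CRAWLER_TOKENS
    }

if __name__ == "__main__":
    # Placeholder domain, for illustration only.
    print(opt_out_status("https://news-site.example"))
```

Because robots.txt is advisory rather than binding, an enforceable scheme would pair signals like these with the registries, usage logs, and independent audits listed above.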

Risks if "use first, compensate later" moves ahead

  • Underpayment due to unverifiable usage and asymmetric information.
  • Market substitution for news content, weakening original reporting incentives.
  • Data concentration among a few platforms, raising entry barriers and public trust issues.

What to watch next

  • Draft bill text and the exact scope of the exemption.
  • Whether the plan includes an enforceable data transparency rule set.
  • The compensation method: rates, reporting, audit rights, and penalties.
  • Stakeholder hearings and regulatory impact assessments.

Practical steps for agencies now

  • Map stakeholders: publishers, wire services, local outlets, platforms, model developers, academia.
  • Define measurable safeguards: dataset registries, logging standards, and opt-out enforcement (a registry sketch follows this list).
  • Pressure-test compensation models using real usage data and independent audits.
  • Align with international norms to reduce future trade and compliance friction.
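
To make "dataset registries" more concrete, the sketch below shows the kind of fields a registry entry might capture. Every field name and value here is a hypothetical illustration, not drawn from any draft bill or existing registry.

```python
# Sketch of a hypothetical training-data registry entry; all fields illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingDataRecord:
    dataset_id: str        # registry-assigned identifier
    source: str            # publisher or archive of origin
    works_covered: int     # number of copyrighted works included
    license_basis: str     # e.g. "licensed" or "opt-out-respected"
    model_versions: list[str] = field(default_factory=list)
    registered_on: date = field(default_factory=date.today)

# Example filing a model developer might submit to a regulator.
record = TrainingDataRecord(
    dataset_id="KR-REG-0001",
    source="Example Daily (hypothetical outlet)",
    works_covered=12_500,
    license_basis="licensed",
    model_versions=["newsmodel-v1.2"],
)
print(record)
```

A workable scheme would version these filings and attach audit rights, in line with the safeguards above.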

If your team needs structured upskilling on AI policy, governance, and risk, see focused learning paths by role at Complete AI Training.

