UK copyright is unfit to protect creative workers from AI: what needs to change
The UK government is weighing four routes for AI and copyright, with a report and impact assessment due on 18 March 2026. The options: leave the law as is; require licences in all cases; allow a broad data mining exception; or permit a limited data mining exception with a rights reservation for creators, plus stronger transparency rules.
On paper, that looks balanced. In practice, today's copyright markets skew toward big rights owners and big tech, not individual creatives. Without structural fixes, any AI reform risks reinforcing that imbalance.
What's actually at stake
AI developers argue they need vast, diverse datasets to build general-purpose models. Some claim training on copyrighted work is a non-expressive, transformative use that should qualify under fair-use-style logic (fair dealing, in UK terms). Others say licensing at the required scale is simply impossible.
Creators counter that their work has been scraped without consent, transparency is poor, and there are no practical controls or remedies. In the UK consultation that closed in February 2025, only 3% of respondents supported an opt-out approach, while 95% wanted stronger rights, mandatory licensing, or no legal change.
The government's current direction
After the backlash against opt-out, ministers signalled interest in a licence-first approach. The challenge is to give creators real control and pay without tanking UK competitiveness. Officials also acknowledge that models trained elsewhere may still reach the UK market if the rules here get too strict.
The legal grey: training vs infringement
UK law allows "transient copies" when they are temporary, integral to a technological process, and have no independent economic value. Training often retains only tiny fragments of any one work, which complicates infringement claims: a stray fragment may not amount to a "substantial part". But models can "memorise" and reproduce protected works near-verbatim, and that crosses the line.
Current "fair dealing" exceptions cover uses such as research, criticism, and review, plus a 2014 exception permitting text and data mining for non-commercial research. Commercial AI training sits in a contested zone, which is why clarity is overdue. See the UK IPO guidance on exceptions to copyright: gov.uk/guidance/exceptions-to-copyright.
Why copyright alone won't save creatives
For decades, expanding copyright has delivered outsized gains to big intermediaries, not individual artists. Authors and analysts have shown how profits concentrated while creator incomes lagged. A blanket "more copyright" fix will likely repeat that pattern in AI.
Even if licences are required, only a few AI firms can afford them at scale. That tilts the market toward incumbents, locking out smaller players and leaving individual creators bargaining with giants.
The labour reality: AI is used to cut costs
AI is deployed to reduce headcount and speed up throughput, and the early evidence matters: Harvard Business Review reported drops in demand for writing work (~30%), coding (~20%), and image creation (~17%) after the rise of major generative tools. The upshot is that creators end up competing with models trained on their own work.
You can fix consent and pay, and still lose the market if buyers switch to cheaper AI outputs. That's a labour and competition problem, not just an IP problem.
What a workable fix looks like
Policy principles that protect creators and keep innovation honest
- Licence-first with a true right to say no: Default to permission, no backdoor opt-outs. Offer collective mechanisms for small creators so negotiation isn't one-sided.
- Real transparency: Require model training logs, dataset summaries, and reproducible audit trails. Keep trade secrets safe, but reveal enough for rights checks and redress.
- Memorisation safeguards: Testing, thresholds, and remediation if a model reproduces protected works; penalties that scale with reach and revenue. A minimal testing sketch follows this list.
- Collective bargaining + extended collective licensing (ECL): Enable unions and collecting societies to negotiate sector-wide terms, with opt-outs for creators who want individual deals.
- Competition and antitrust enforcement: Block exclusive data deals that foreclose markets. Scrutinise mergers that concentrate datasets, distribution, or model access.
- Displacement and training funds: A levy on high-revenue AI uses to fund re-skilling, minimum guarantees, and transitional support for affected creative roles.
- Interoperable standards: Machine-readable "TDM-reservation" signals, content credentials, and provenance tags that survive the supply chain.
- Research carve-outs with guardrails: Preserve non-commercial research exceptions while preventing leakage into commercial models without consent.
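To make the memorisation principle concrete, here is a minimal sketch of the kind of test an auditor could run: sample model outputs and measure verbatim character n-gram overlap against a corpus of protected works, flagging anything above a threshold. Everything in it is illustrative: the toy corpus, the sampled outputs, the 50-character n-gram size, and the 20% threshold are placeholder assumptions, not an audit standard.

```python
# A minimal memorisation check (illustrative only): flag model outputs that
# reproduce long verbatim spans of a protected work. The corpus, samples,
# n-gram size, and threshold are placeholder assumptions.

def char_ngrams(text: str, n: int) -> set:
    """All character n-grams in text, with whitespace normalised."""
    text = " ".join(text.split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_ratio(output: str, work: str, n: int = 50) -> float:
    """Fraction of the output's n-grams that appear verbatim in the work."""
    out_grams = char_ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & char_ngrams(work, n)) / len(out_grams)

def audit(outputs, corpus, threshold=0.2):
    """Yield (output index, work title, ratio) for every hit above threshold."""
    for i, out in enumerate(outputs):
        for title, work in corpus.items():
            ratio = overlap_ratio(out, work)
            if ratio >= threshold:
                yield i, title, ratio

if __name__ == "__main__":
    # Toy stand-ins for a rights-holder corpus and sampled model outputs.
    protected = {"example-novel": "The lighthouse keeper counted nine grey "
                                  "ships before the fog swallowed the horizon."}
    samples = [
        "The lighthouse keeper counted nine grey ships before the fog "
        "swallowed the horizon.",                          # near-verbatim: flagged
        "A keeper watched the fog roll in over the bay.",  # original: ignored
    ]
    for idx, title, ratio in audit(samples, protected):
        print(f"output {idx} overlaps '{title}' at {ratio:.0%}")
```

A regulatory version of this would run at scale over adversarial prompts, with thresholds set per sector and remediation obligations triggered by hits.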
Practical steps for creatives and legal teams now
- Contractual clauses: Add explicit "no AI training" terms and provenance obligations in publishing, licensing, and platform agreements. Require model-disclosure and indemnities when feasible.
- Reserve your rights technically: Use robots.txt and TDM-reservation headers where supported. Embed content credentials (e.g., C2PA) and keep verifiable originals. A verification sketch follows this list.
- Register and monitor: Register works, watermark responsibly, and document release dates. Track suspicious outputs and keep evidence for takedowns or claims.
- Join collective efforts: Unions, guilds, and collecting societies strengthen bargaining power and reduce per-artist legal overhead.
- Price for substitution risk: When licensing, distinguish "reference use" from "training use." Charge more for uses that create direct substitutes for your work.
- Data minimisation in collaborations: Share only what's necessary. Use clean rooms or limited excerpts instead of full archives where possible.
- Internal AI policy (for studios/agencies): Set rules on when AI can assist, how human review works, and what data is off-limits. Keep an audit trail.
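As a companion to the "reserve your rights technically" step above, here is a sketch that checks whether a site is actually serving its reservation signals: mentions of known AI crawlers in robots.txt, the tdm-reservation HTTP header, and the /.well-known/tdmrep.json file proposed by the W3C TDM Reservation Protocol (TDMRep) draft. The crawler tokens and TDMRep conventions follow public documentation, but treat the script as an assumption-laden starting point, not a compliance tool.

```python
# Check whether a site publishes machine-readable TDM reservation signals.
# Conventions follow the W3C TDMRep community draft and the documented
# user-agent tokens of major AI crawlers; verify both against current
# specs before relying on this.
import json
import urllib.request

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "ClaudeBot"]

def fetch(url):
    """Return (lower-cased headers, body text), or ({}, '') if the fetch fails."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            headers = {k.lower(): v for k, v in resp.headers.items()}
            return headers, resp.read().decode("utf-8", "replace")
    except OSError:
        return {}, ""

def check_site(base):
    # 1. robots.txt: are the known AI crawlers addressed at all?
    _, robots = fetch(f"{base}/robots.txt")
    for bot in AI_CRAWLERS:
        status = "mentioned" if bot.lower() in robots.lower() else "NOT mentioned"
        print(f"robots.txt: {bot} {status}")

    # 2. tdm-reservation header on the homepage (TDMRep: '1' = rights reserved).
    headers, _ = fetch(base)
    print(f"tdm-reservation header: {headers.get('tdm-reservation', 'absent')}")

    # 3. Site-wide TDMRep declaration file.
    _, body = fetch(f"{base}/.well-known/tdmrep.json")
    if body:
        try:
            print("tdmrep.json:", json.dumps(json.loads(body), indent=2))
        except json.JSONDecodeError:
            print("tdmrep.json: present but not valid JSON")
    else:
        print("tdmrep.json: absent")

if __name__ == "__main__":
    check_site("https://example.com")  # replace with your own domain
```

Run it against your own domain after deploying the signals; if the header and the well-known file disagree, fix the mismatch, since crawlers that honour the protocol may read either.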
Where industry arguments currently land
- Tech says: broad licensing is unworkable; transparency reveals trade secrets; stricter UK rules risk offshoring models and shrinking access.
- Creatives say: consent and pay are non-negotiable; opt-out flips the burden; without enforcement and collective tools, "licensing" is theatre.
- Courts will test: memorisation, substantial similarity, and market harm. Expect case-by-case outcomes until legislation firms up the rules.
What to watch before 18 March 2026
- The licensing model: Does the government back licence-first with enforceable transparency? Or carve broad exceptions that weaken bargaining power?
- Displacement mechanisms: Any levy, fund, or mandatory negotiation step before deploying AI that replaces creative roles?
- Collective options: Will creators get ECL pathways and union-backed templates that set floor terms across sectors?
- Cross-border reality: How the UK handles foreign-trained models and avoids an offshoring loophole while staying compatible with EU and US norms.
Bottom line
If reforms stop at copyright, individual creators lose. The fix is a package: licence-first consent, real transparency, collective bargaining, antitrust enforcement, and worker protections. Anything less invites extraction on one side and consolidation on the other.
For additional perspective on AI, copyright, and user rights, see the Electronic Frontier Foundation's AI resources: eff.org/issues/ai.