Denmark moves to ban AI deepfakes that mimic faces and voices

Denmark plans to ban sharing deepfakes that copy a person's face or voice without consent, pulling likeness rights into copyright. Lawmakers expect a vote early next year.

Categorized in: AI News, Legal
Published on: Nov 07, 2025

Denmark is set to tighten rules on synthetic media. A proposed bill would amend copyright law to ban the sharing of deepfakes that mimic a person's personal characteristics, such as their face or voice, without consent. Lawmakers expect a vote early next year.

For legal teams, this creates a clear compliance line: distributing AI-generated impersonations without consent could become illegal content in Denmark. It also pulls likeness and voice into a rights framework traditionally reserved for creative works.

What the bill targets

  • Distribution of AI-generated content that imitates a real person's appearance or voice without their consent.
  • Protection of "personal characteristics" anchored in copyright law rather than a separate image-right statute.
  • Objective: deter the creation and spread of harmful impersonations and give individuals a faster path to removal and redress.

Who's affected

This touches creators, platforms, advertisers, production studios, and newsrooms. It also matters to individuals who rely on a public likeness, such as streamers, voice actors, journalists, and public figures, as well as private citizens who want tighter control of their image.

For example, Danish video game live-streamers like Marie Watson face heightened impersonation risks from voice cloning and face-swap tools. The bill is built to address that kind of unauthorized mimicry before it scales.

Key intersections for counsel

  • Data protection: Faces and voices can be biometric data. Consent isn't a formality; it must be explicit and provable. See the European Commission's EU GDPR overview for baselines and lawful use cases.
  • Platform duties: If deepfakes become illegal content in Denmark, platform obligations under the EU's Digital Services Act (notice and action, transparency reporting) come into play.
  • Defamation/harassment: Harmful deepfakes often trigger multiple causes of action. Map overlaps to streamline remediation and enforcement.
  • Speech protections: Expect debates on satire, reporting, research, and public-interest uses. Watch for explicit exemptions or tests in the final text.

Enforcement and risk

Once enacted, hosting or distributing unauthorized impersonations could invite takedown demands, injunctions, and damages claims. Platforms may face accelerated timelines to act on notices originating from Denmark, with DSA reporting and transparency layers attached.

Compliance checklist

  • Policy: Explicitly prohibit synthetic impersonations of identifiable individuals without documented consent. Call out face swaps and voice clones by name.
  • Consent: Implement capture and retention for likeness/voice permissions. Store originals, timestamps, and scope (territory, duration, revocation).
  • Takedown: Build a deepfake-specific triage path. Prioritize sexualized content and minors. Track DSA deadlines and produce audit logs.
  • Labeling: Where consented synthetic media is allowed, require visible labeling and embed provenance/watermarks in the file.
  • Contracts: Add warranties, representations, and indemnities covering synthetic media. Include geo-blocking rights for Denmark on noncompliant content.
  • Vendors: Conduct due diligence on AI tools and production partners. Ban training or inference on customer assets without written approval.
  • Training: Equip moderators, creators, and legal ops with recognition cues and escalation playbooks.
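The consent item in the checklist above can be sketched as a minimal record schema. This is an illustrative assumption, not anything the Danish bill prescribes: the field names (scope, territories, expiry, revocation) simply mirror the capture-and-retention points listed, and a real system would add signed originals and audit logging.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class LikenessConsent:
    """Hypothetical consent record for likeness/voice use (illustrative fields)."""
    subject_id: str          # the person whose face or voice is used
    scope: set[str]          # permitted uses, e.g. {"face_swap", "voice_clone"}
    territories: set[str]    # ISO country codes where use is permitted
    granted_at: datetime     # when consent was captured
    expires_at: datetime     # end of the agreed duration
    revoked_at: datetime | None = None  # set when the subject withdraws consent

    def permits(self, use: str, country: str, at: datetime) -> bool:
        """True only if the use, territory, and time window all match."""
        if self.revoked_at is not None and at >= self.revoked_at:
            return False
        return (
            use in self.scope
            and country in self.territories
            and self.granted_at <= at < self.expires_at
        )
```

The point of the sketch: consent is checked per use, per territory, and per point in time, so revocation or expiry automatically fails the check rather than relying on someone remembering to pull content.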

Cross-border considerations

Distribution from outside Denmark doesn't avoid risk if content is accessible inside the country. Update choice-of-law, notice, and venue clauses. Prepare geo-blocking and country-specific workflows when consent cannot be validated.
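The geo-blocking workflow above reduces to a simple serving gate. A minimal sketch, assuming a restricted-country set and a consent-validation flag supplied by upstream systems; both names are hypothetical:

```python
# Hypothetical publish gate: in restricted countries, synthetic-likeness
# content is served only when consent has been validated.
RESTRICTED = {"DK"}  # countries requiring validated consent (assumption)


def may_serve(country_code: str, consent_validated: bool) -> bool:
    """Serve freely outside restricted countries; inside them,
    serve only with validated consent."""
    if country_code in RESTRICTED:
        return consent_validated
    return True
```

The design choice worth noting: the gate fails closed inside restricted territories, so content with unverifiable consent is blocked there by default instead of published pending review.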

Open questions to monitor

  • Definition: What qualifies as a "deepfake" versus benign editing? Where is the threshold?
  • Exceptions: Will satire, news reporting, research, or public-interest uses be explicitly carved out?
  • Liability: How will responsibility split among creators, uploaders, hosts, and tool providers?
  • Mental state: Will "knowing" distribution be required, or will strict liability apply for certain content types?
  • Remedies: Statutory damages, expedited injunctions, takedown timelines, and repeat-infringer rules.

30/60/90-day plan for in-house counsel

  • 30 days: Gap-assess policies, consent practices, and notice-and-action flows. Stand up a rapid review squad.
  • 60 days: Update T&Cs, creator/vendor agreements, and content guidelines. Add Denmark-specific clauses and enforcement levers.
  • 90 days: Run tabletop exercises, measure takedown speed, and validate consent storage and audit trails.

Practical takeaway

Treat unauthorized likeness and voice clones as high-risk assets. If consent isn't clear, don't publish, don't boost, and don't monetize, especially in Denmark. Build consent, provenance, and takedown muscle now, before the law lands.


