AI-Generated Songs Hijack Dead Artists' Spotify Pages

AI-made songs are appearing on deceased artists' Spotify pages, then getting pulled for violating deceptive-content rules. Creatives are urged to audit their catalogs as fans push for clear labels and filters.

Categorized in: AI News, Creatives
Published on: Sep 22, 2025

AI-generated tracks are appearing on deceased artists' Spotify pages. Here's what creatives need to know.

Acts like The Velvet Sundown and TaTa are built entirely with AI. That alone sparked debate. Now AI-generated songs have shown up on the Spotify pages of deceased artists - a step too far for many fans and rights holders.

What happened

A new track called "Together" appeared on Blaze Foley's Spotify page before getting pulled. Spotify said it violated "deceptive content policies," which prohibit impersonation intended to mislead. The song carried the markers of a Foley-style country track and used an AI-generated image that didn't even look like him.

Craig McDonald of Lost Art Records didn't mince words: "It's kind of an AI schlock bot… that whole posting has the authenticity of an algorithm." Guy Clark's page faced similar treatment with AI-generated music and imagery.

Spotify's stance

Spotify says it will "take action against licensors and distributors who fail to police for this kind of fraud," including permanent removal for repeat offenders. Enforcement is happening after the fact. There's no AI tag on tracks or artist pages today.

Fans are noticing. Some report AI songs slipping into Discover Weekly and are calling for an AI filter. The platform has not rolled out a labeling system.

Spotify's Platform Rules spell out the policy on deceptive content, but don't specify a public-facing label for AI use.

The industry pressure

There's no legal requirement for Spotify to identify AI-generated music. Still, calls for transparency are growing. "We're calling on the UK government to protect copyright and introduce new transparency obligations for AI companies… as well as calling for the clear labelling of content solely generated by AI," said Sophie Jones, Chief Strategy Officer at the British Phonographic Industry.

Expect more pressure from industry bodies and policy discussions about attribution, licensing, and disclosure. The BPI has been especially vocal on this front.

Why it matters for creatives

AI mimicry blurs authorship and corrodes trust. Catalog integrity becomes a moving target. If platforms don't proactively label and gate, your artist identity and royalty streams are at risk - especially for legacy catalogs where estates aren't watching day to day.

Immediate actions for artists, labels, and estates

  • Audit your catalog weekly. Check artist pages, recent releases, and imagery for impersonations (a minimal automation sketch follows this list).
  • Lock down metadata. Maintain accurate ISRCs, credits, label info, and verified distributor relationships (a quick ISRC format check also follows below).
  • Update contracts. Add AI-impersonation, synthetic likeness, and "voice/style" provisions with clear takedown rights.
  • Create a fast lane for takedowns. Document evidence, reference policy language, and keep a template ready for platforms and distributors.
  • Watermark originals where feasible and keep stems organized. Proof helps in disputes.
  • Activate your community. Pin a note on official channels about how to spot/report impersonations.
  • Coordinate with estates. Assign monitoring, define response windows, and centralize rights contacts.
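
As one way to put the weekly audit on rails, here is a minimal sketch against the public Spotify Web API. It is an illustration, not a turnkey tool: you supply your own client credentials, the artist ID for the page you monitor, and an allowlist of release IDs you recognize (KNOWN_RELEASE_IDS below is a placeholder you would maintain yourself). Anything on the artist page that is not in the allowlist gets flagged for a human to review.

```python
# Sketch: flag unrecognized releases on a Spotify artist page.
# Assumptions: you have Spotify API client credentials, and you maintain
# KNOWN_RELEASE_IDS (a placeholder allowlist of album/single IDs you trust).
import requests

CLIENT_ID = "your-client-id"          # placeholder credential
CLIENT_SECRET = "your-client-secret"  # placeholder credential
ARTIST_ID = "spotify-artist-id"       # the artist page you are monitoring
KNOWN_RELEASE_IDS = {"known-album-id-1", "known-album-id-2"}  # placeholder allowlist


def get_token() -> str:
    # Client-credentials flow; public catalog data needs no user login.
    resp = requests.post(
        "https://accounts.spotify.com/api/token",
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def list_releases(token: str) -> list[dict]:
    # Page through every album and single attributed to the artist.
    url = f"https://api.spotify.com/v1/artists/{ARTIST_ID}/albums"
    params = {"include_groups": "album,single", "limit": 50}
    headers = {"Authorization": f"Bearer {token}"}
    releases = []
    while url:
        resp = requests.get(url, params=params, headers=headers, timeout=10)
        resp.raise_for_status()
        page = resp.json()
        releases.extend(page["items"])
        url = page.get("next")  # follow pagination links
        params = None           # "next" already carries the query string
    return releases


if __name__ == "__main__":
    for album in list_releases(get_token()):
        if album["id"] not in KNOWN_RELEASE_IDS:
            print(f"UNRECOGNIZED: {album['name']} "
                  f"({album['release_date']}) {album['external_urls']['spotify']}")
```

Run it on whatever schedule you already have (cron, a CI job) and treat an "UNRECOGNIZED" line as a prompt to investigate, not proof of fraud: legitimate reissues and compilations will trip it too.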

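For the metadata bullet, a structural check can catch malformed ISRCs before they reach a distributor feed. The sketch below only validates the standard 12-character layout (country code, registrant code, year of reference, designation code); it cannot confirm a code is actually registered to your recording.

```python
import re

# Structural ISRC check: 2-letter country code, 3-character alphanumeric
# registrant code, 2-digit year of reference, 5-digit designation code.
# Hyphens are optional (e.g. "US-S1Z-99-00001" or "USS1Z9900001").
ISRC_PATTERN = re.compile(r"^[A-Z]{2}-?[A-Z0-9]{3}-?\d{2}-?\d{5}$")


def is_valid_isrc(code: str) -> bool:
    return bool(ISRC_PATTERN.match(code.strip().upper()))


if __name__ == "__main__":
    for isrc in ["US-S1Z-99-00001", "USS1Z9900001", "BAD-CODE"]:
        print(isrc, "->", "ok" if is_valid_isrc(isrc) else "malformed")
```
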
What fans want (and what you can advocate)

  • Clear AI labels on tracks, artists, and artwork.
  • Optional filters to exclude AI-generated music from recommendations and playlists.
  • Visible provenance: who performed, produced, and which tools were used.

Push for platform-level tags and filters. Encourage fans to report impersonations and support verified channels.

What to watch next

  • Platform policy shifts: labels, filters, and proactive detection.
  • Licensing standards: consent, compensation, and disclosure for training data and voice/style.
  • Regulatory moves: transparency requirements and enforcement mechanisms.

Until transparency becomes default, assume policing falls on you. Treat AI impersonation like spam: fast detection, faster removal, and constant education for your audience.

Level up your AI literacy

If you're building in public or managing catalogs, keep your skills current. Practical training helps you spot synthetic content, set policies, and adopt AI ethically in your own workflow.

Browse AI courses by job role to set standards for your team and stay ahead of the next wave of copycats.