Spotify launches AI crackdown to protect artists, removes 75 million spam tracks
Spotify adds tougher anti-impersonation rules, a platform-wide spam filter, and AI disclosures in credits. Goal: curb abuse, cut spam, and protect artists' visibility and pay.

Spotify tightens AI protections for artists and producers
Spotify is rolling out new safeguards to protect creatives from AI abuse: tougher impersonation rules, a platform-wide spam filter, and AI disclosures baked into industry-standard credits. The company says it has already removed 75 million spammy tracks and is stepping up enforcement.
Spotify's stance is clear: AI can help make and discover music, but it can also flood platforms with slop, confuse listeners, and siphon royalties. The new measures aim to curb abuse without blocking artists who choose to use AI responsibly.
Stronger rules and enforcement on impersonation
Spotify has introduced an impersonation policy that addresses AI voice clones and other unauthorized vocal replicas. Vocal impersonation is only allowed when the impersonated artist has authorized it, giving musicians clearer protections and recourse.
The company is also investing to stop a common attack: fraudulent uploads delivered to the wrong artist profiles across streaming services. Spotify is testing new prevention tactics with leading distributors to block these uploads at the source.
New spam filter to reduce mass slop
Expect a crackdown on mass uploads, duplicates, SEO hacks, and artificially short tracks. Spotify's new system will tag uploaders and tracks that use these tactics and stop recommending them, with safeguards to avoid penalizing legitimate creators.
The filter will roll out over the coming months. The goal: improve listener experience and protect real artists' visibility and royalties.
AI disclosures with industry-standard credits
Spotify says AI use is a spectrum. Instead of forcing "AI or not AI" labels, it will support detailed AI disclosures within music credits, developed through the Digital Data Exchange (DDEX) standards process. These disclosures will be visible across the app.
This gives listeners context and helps artists stay in control of how AI shows up in their work.
Why this matters
Undeclared AI acts have already slipped through and racked up significant streams before being exposed. Labels and fans have pushed for stricter protections, and industry voices have welcomed measures like content filtering, infringement checks, penalties for repeat offenders, chain-of-custody certification, and name-and-likeness verification.
Bottom line: less noise, more trust, and a fairer shot for working artists.
What to do next as a creative
- Ask your distributor about anti-impersonation checks, name-and-likeness verification, chain-of-custody certification, and penalty systems for repeat infringers.
- Claim and secure your artist profiles across major platforms. Monitor for unauthorized uploads and report impersonation quickly.
- Keep clean credits: collaborators, stems, and any AI tools used. Be ready to include AI disclosures in your metadata.
- Avoid spammy tactics: mass near-duplicates, keyword-stuffed titles, or artificially short tracks. These will get demoted.
- Use available rights-management and takedown tools. Document everything to speed up enforcement if needed.
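To make the "keep clean credits" step concrete, here is a minimal sketch of how a self-managed artist might track per-release credits, including AI disclosures, in their own records. The field names are illustrative assumptions for personal bookkeeping, not the official DDEX schema or any distributor's required format:

```python
# Illustrative sketch only: a simple per-track credit record that captures
# collaborators, stems, and AI tool usage so disclosures are ready when
# your distributor asks for them. Field names are assumptions, not a
# standard schema.

track_credits = {
    "title": "Example Track",
    "collaborators": [
        {"name": "A. Songwriter", "role": "composer"},
        {"name": "B. Producer", "role": "producer"},
    ],
    "stems_archived": True,
    "ai_usage": [
        # Disclose each AI tool and what it actually contributed
        {"tool": "ExampleSynth AI", "contribution": "generated backing pad"},
    ],
}

def has_ai_disclosure(credits: dict) -> bool:
    """True if the record lists any AI contributions that should be disclosed."""
    return bool(credits.get("ai_usage"))

print(has_ai_disclosure(track_credits))  # True
```

Keeping this kind of record per release means that when disclosure fields land in your distributor's metadata forms, you can fill them in from documentation rather than memory.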
Further reading
Build your AI fluency without risking your catalog
If you're integrating AI into your workflow, it pays to stay sharp on best practices and compliance. Explore practical programs by creative role at Complete AI Training.