"It was clearly not me": Shaun Rein's voice cloned with AI to read his book on a "deepfake" podcast
Seven YouTube videos. More than 200,000 streams in a week. And an author hearing "his" voice read chapters from his book - without his consent.
Shaun Rein, founder of the China Market Research Group and author of The Split (John Murray Press, 2024), discovered podcast-style videos that used an AI clone of his voice to narrate content taken from his book. He says the mimicry was so accurate that he only realized it was fake when "he" switched to Mandarin - a language he rarely records in.
What happened
The videos appeared on a channel called The US-China Narrative, which seems to be based in Singapore. Rein believes creators scraped his publicly available interviews to build a voice model and then used his written text - "95%" of it his words - to script the episodes.
His name, image, and ideas drove the content. The channel reaped the views. He, his agent, and his publisher were not asked, credited, or paid.
Why this matters to writers
Audio is a new front for unauthorized use. If your voice is online, it can be cloned. If your text is published, it can be fed to a model and reassembled as "you."
The financial risk is obvious (lost audiobook and licensing revenue). The reputational risk is bigger. As Rein's agent warned, the next iteration could put words in your mouth that you never said - and damage takes time to undo.
What platforms and publishers are doing
Rein's publisher condemned the infringement and said they have ongoing anti-piracy processes with YouTube. YouTube says its existing policies apply to AI-generated content and rights holders can file takedowns.
If you own the rights, you can submit a copyright removal request. For broader guidance and advocacy, the Society of Authors has been vocal about generative AI's impact on creators.
Practical steps writers can take now
- Scan for unauthorized use weekly: Search your name + "audiobook," "podcast," and chapter titles on YouTube, Spotify, Apple Podcasts, and TikTok. Set Google Alerts and YouTube channel keyword alerts. If you're comfortable with a little scripting, a rough automation sketch follows this list.
- File fast, document everything: Screenshot the video, description, channel name, timestamps, and view count. File a takedown on the platform. Repeat for mirrors and clips.
- Claim your audio rights and release an official feed: If you hold audio rights, publish a short, legitimate sample (even a 5-minute "chapter zero"). Owning a verified podcast or YouTube channel makes impersonations easier to flag.
- Negotiate AI clauses in your contracts: Specify no training, cloning, or synthetic use of your voice, likeness, or text without explicit, written permission and payment. Define penalties and removal timelines.
- Use distinct tells in your official audio: A consistent intro line, specific background bed, or audible watermark can help fans (and platforms) verify the real thing.
- Coordinate with your agent/publisher: Decide who files takedowns by territory and format. Keep a shared log of links, filings, and outcomes to avoid duplicated effort.
- Prepare a rapid comms note: Draft a short post: "Deepfake content is circulating. Here's where to find my official work." Publish it if needed and pin it.
- Monitor impersonation accounts: Report fake social profiles early. Consistent naming across your channels reduces confusion.
- Consider anti-piracy tools: Services that fingerprint audio/video can save time if your work gets targeted repeatedly.
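For the weekly scan in the first step, writers who script can automate the YouTube part. The sketch below uses the public YouTube Data API v3 search endpoint; the API key, author name, and search terms are placeholders you would replace with your own (a key is available free from Google Cloud). Treat it as an illustration under those assumptions, not a finished monitoring tool.

```python
# Minimal sketch: list YouTube videos from the past week that match your name
# plus book-related terms. Assumes a YouTube Data API v3 key in YOUTUBE_API_KEY.
import datetime
import os

import requests

API_KEY = os.environ.get("YOUTUBE_API_KEY", "")  # placeholder: your own API key
AUTHOR = "Your Name"                             # placeholder: the author's name
BOOK_TERMS = ["Your Book Title audiobook", "Your Book Title podcast"]  # placeholder terms


def search_recent(query: str, days: int = 7) -> list[dict]:
    """Return videos matching `query` that were published in the last `days` days."""
    published_after = (
        datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)
    ).isoformat()
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "part": "snippet",
            "q": query,
            "type": "video",
            "order": "date",
            "publishedAfter": published_after,
            "maxResults": 25,
            "key": API_KEY,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])


if __name__ == "__main__":
    for term in BOOK_TERMS:
        for item in search_recent(f'"{AUTHOR}" {term}'):
            snippet = item["snippet"]
            video_id = item["id"]["videoId"]
            print(f'{snippet["publishedAt"]}  {snippet["channelTitle"]}  '
                  f'{snippet["title"]}  https://www.youtube.com/watch?v={video_id}')
```

A script like this only covers YouTube; Spotify, Apple Podcasts, and TikTok still need manual searches or their own reporting tools. The point is to make the weekly check cheap enough that it actually happens, and to capture the links and dates you can paste straight into your takedown log.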
The uncomfortable truth Rein's case exposes
There's demand for author-led audio - and bad actors are filling the gap. One of Rein's deepfake videos hit 70,000 views in 24 hours. That's a signal.
If you don't publish in the formats readers want, someone else might - with or without consent. The fix isn't only enforcement. It's also building official channels readers can find and trust.
Make your work harder to steal - and easier to support
- Offer the format: Short podcast summaries per chapter, Q&A episodes, or an author commentary track can satisfy demand without cannibalizing your audiobook.
- Own your distribution: Your site, your newsletter, your verified channels. Link them everywhere.
- Be explicit: State where your official audio lives and how to verify it. Readers will help you police fakes if you make it clear.
What Rein wants
He wants platforms to do more to label or remove AI deepfakes, and for creators to keep control of their voices and words. He also worries about the next step: malicious content that could damage credibility - not just sales.
The message to writers is simple: protect your IP, move faster on audio, and have a plan for takedowns and public statements. That mix of offense and defense is the new baseline.
Level up your AI literacy
If you want practical training to stay ahead of AI's impact on your work and rights, browse curated options here: Latest AI courses.