Jared Harris Pushes Back on AI Deepfake Use Without Consent
Jared Harris, known for The Crown and Chernobyl, has asked his lawyers to step in after a new podcast used an AI-generated version of his likeness in promotional materials without his approval. The show, Films Not Made, uses AI to "resurrect" old Hollywood pitches with new trailers and pitch decks.
In his words: "This is a perfect example of the concern 'creatives' like myself have over the misuse of AI, namely the unauthorized use of one's image, voice, work or likeness without prior consultation where the intention is to generate an income stream." He called the podcast a commercial venture set up to benefit its producers.
What happened
Films Not Made is hosted by producer Amy Hobby and director Avi Zev Weider. A promo clip featured an AI-generated Jared Harris; he says neither he nor his reps were contacted and he did not consent to the use.
Harris stated, "As artists, we view the coming wave of AI generated content with curiosity and suspicion. Frankly, the suspicion rests entirely with the human element that wields it, not the technology itself." He instructed his lawyers to issue a cease and desist and to remove his image from the trailer and artwork; the show's team said the edit was made.
The hosts explained their choice: Hobby cited a prior professional encounter with Harris. Weider argued that "We're all breathing in the same air now… These tools exist… we've been swept into something none of us consented to. The question is what you do once you're breathing it in."
Why this matters for creatives
Consent isn't a courtesy. It's the floor. Using a person's likeness or voice to promote or monetize work without sign-off can trigger right of publicity claims and breach contract provisions many artists already have in place.
If you work with performers, authors, illustrators, or musicians, AI doesn't erase your obligations; it adds new ones. Understand your local laws on likeness and voice rights, union rules, and the conditions under which AI outputs can be used commercially. A quick primer on the right of publicity is here: Cornell Law: Right of Publicity.
A practical checklist before you use AI likeness or voice
- Get written consent before generating or publishing any AI approximation of a person's image or voice. "Implied" consent isn't consent.
- Spell out scope: media, territories, duration, exclusivity, and whether fine-tuning or retraining on their data is allowed.
- Compensate fairly. Add residuals or usage-based fees for ongoing campaigns or retrained models.
- Offer a kill switch. Include takedown and revocation terms if scope creeps or new use cases emerge.
- Track provenance. Document models, prompts, seeds, fine-tunes, and datasets used. Keep an audit trail.
- Label synthetic media. Add clear disclosures in promos; use content credentials where possible.
- Mind union and guild rules. If working with members, align with current AI provisions. See SAG-AFTRA AI resources.
- Avoid lookalike or soundalike traps. "Transformative" isn't a shield if the goal is to evoke a specific person for commercial gain.
- Watch jurisdictional quirks. Some states extend protections to voice; some have postmortem rights; some don't.
- Use ethical defaults. No scraping private footage or voice memos, and no training on materials you don't have rights to.
- Insure the risk. Consider media liability coverage that addresses AI likeness and voice issues.
- Run a preflight review. Legal, comms, and the talent (or their reps) sign off before anything goes live.
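The provenance-tracking and labeling steps above can be sketched as a simple manifest record attached to each synthetic asset. This is a minimal illustration, not an established schema: every field name here (asset_id, consent_reference, and so on) is an assumption you should adapt to your own legal and workflow requirements.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

# Illustrative provenance manifest for a synthetic-media asset.
# Field names are assumptions, not a standard; adapt as needed.
@dataclass
class ProvenanceRecord:
    asset_id: str
    model: str                  # model name/version used for generation
    prompt: str                 # prompt or settings used
    seed: Optional[int]         # random seed, if the tool exposes one
    consent_reference: str      # pointer to the signed consent document
    synthetic: bool = True      # explicit synthetic-media disclosure
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize the audit-trail entry for storage alongside the asset.
        return json.dumps(asdict(self), indent=2)

record = ProvenanceRecord(
    asset_id="promo-trailer-001",                             # hypothetical ID
    model="example-image-model-v2",                           # hypothetical model
    prompt="1940s-style movie trailer frame",
    seed=42,
    consent_reference="contracts/2024-likeness-release.pdf",  # hypothetical path
)
print(record.to_json())
```

Keeping one such record per generated asset gives you the audit trail the checklist calls for, and the explicit `synthetic: true` flag supports the disclosure step.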
Protect your own image and voice
- Update your contracts: prohibit unauthorized training or generation using your face, body, or voice; require consent and compensation for any synthetic reproductions.
- Centralize approvals: route all licensing, endorsements, and AI requests through your rep or a single inbox visible on your site and profiles.
- Use content credentials and watermarks on released media to signal authenticity and support claims.
- Set up monitoring: track promos, trailers, and ads featuring your name or image; prepare a takedown template for fast response.
- If something crosses the line, act quickly: preserve evidence, send a cease and desist, and escalate if usage continues.
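Preparing a takedown template in advance, as suggested above, can be as simple as a fill-in-the-blanks draft. The wording below is illustrative only and not legal advice; the recipient, campaign, and date values are placeholders, and any real notice should come from counsel.

```python
from string import Template

# Hedged sketch of a prepared takedown-notice draft for fast response.
# Wording is illustrative, not legal advice; have counsel review yours.
TAKEDOWN_TEMPLATE = Template("""\
To: $recipient
Subject: Unauthorized use of likeness - $campaign

I am writing regarding $campaign, which uses my image or voice
without consent. I request removal within $deadline_days days and
written confirmation. Evidence was preserved as of $date.
""")

notice = TAKEDOWN_TEMPLATE.substitute(
    recipient="producer@example.com",        # hypothetical contact
    campaign="promotional trailer",          # hypothetical campaign name
    deadline_days=7,
    date="2024-06-01",                       # hypothetical date
)
print(notice)
```

Having the draft ready means the response step is filling in four fields, not composing a letter under pressure.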
Key takeaway
The tech isn't the villain. The shortcuts are. Ask first, pay fairly, document everything, and mark synthetic outputs as synthetic.
That approach doesn't kill creativity; it protects it. And it keeps you out of emails from lawyers like the one Harris just sent.
Level up your AI practice without crossing lines
If you want practical training that respects creative rights and keeps your workflows clean, start here: AI for Creatives. If voice work is part of your portfolio, brush up on safe tools and consent-first methods: Voice Modulation.
Jared Harris's position, in his own words
"My control over the use of my image is protected by contract and settled State and Federal law… I have not ceded that right through any prior professional association or relationship." That's not anti-AI. That's pro-consent.
Use that standard as your baseline. Your future self will thank you.