Aspyr Issues Hotfix to Remove AI Voiceovers from Tomb Raider Remastered amid Legal Dispute
Aspyr pulled AI voiceovers from Tomb Raider Remastered after backlash and legal action by Françoise Cadol. Key issues: consent, likeness rights, contracts, and remaster risk.

Aspyr Media released a hotfix for Tomb Raider Remastered that removes all AI-generated voice content, following player backlash and threatened legal action over the unauthorized use of a performer's voice. The studio apologized and confirmed that the other improvements from the prior patch remain intact.
In its statement, the developer said: "We've addressed this issue by removing all AI voiceover content, while retaining the improvements made in the previous update. We apologize for any inconvenience this may have caused. Please reach out to our customer support site with any issues."
The dispute escalated when French actress Françoise Cadol, Lara Croft's French voice since 1996, announced legal action, alleging that the remastered versions of Tomb Raider: The Last Revelation, Tomb Raider Chronicles, and Tomb Raider: The Angel of Darkness used an AI-generated voice modeled on her performance without consent. "This vaguely resembles mine, but it is not me," Cadol said, calling the practice "pure theft" for "clear and deliberate commercial purposes."
The claim asserts that Aspyr replaced her original recordings with AI-generated audio without permission. Cadol's prominence matters for damages: beyond the Lara Croft role, she served as Angelina Jolie's official French voice for the film adaptations and has extensive dubbing credits for Sandra Bullock, Tilda Swinton, and Patricia Arquette.
Legal issues in play
- Voice likeness and publicity/personality rights: Using a performer's distinctive voice or a close imitation for commercial purposes without consent is a high-risk move in many jurisdictions.
- Performers' neighboring rights and moral rights (France/EU): Performers typically control fixation and use of their performances; moral rights and neighboring rights may be implicated by substitution or alteration of a performance.
- Copyright in original recordings: Replacing original tracks with AI outputs may raise questions about the scope of licenses, alteration rights, and removal of attribution.
- Contract scope and legacy licenses: Key questions include whether prior agreements permit synthetic replication, derivative voice models, or substitution in remasters.
- Consumer protection/unfair competition: Misleading consumers about the origin of a performance can trigger statutory and regulatory scrutiny.
- Privacy/biometric angles: Voice cloning can touch biometric or personal data laws in certain regions (for example, Illinois' Biometric Information Privacy Act regulates voiceprints).
- AI development practices: Discovery may probe training sources, prompts, similarity metrics, and whether the model targeted the performer's identity.
- Jurisdiction and choice of law: Cross-border facts (French performer, international distribution, US publisher) raise forum, applicable law, and enforcement considerations.
Why the hotfix matters
Pulling the AI voiceovers is a mitigation step, not a legal shield. It may reduce ongoing harm, but it does not cure past use or potential misappropriation. It can influence injunction and damages arguments, and preserving evidence remains critical: internal teams should avoid altering logs, assets, or training data without legal guidance.
The move also signals how fast reputational and legal risk can force content changes at scale, an important precedent for publishers deploying synthetic media.
Practical steps for studios and counsel
- Consent-first policy: Obtain written, specific consent for any voice cloning, imitation, or TTS modeled on a performer. Prohibit unapproved "soundalike" uses.
- Contract language: Add explicit clauses on synthetic performance rights, data usage, training, derivatives, alteration/substitution, attribution, and revocation. Include audit and approval rights for remasters.
- Record provenance: Maintain a chain of custody for training data, prompts, models, and outputs. Log who approved what, and when (see the record sketch after this list).
- Vendor governance: Require warranties, detailed disclosures, and indemnities from AI vendors; mandate human-in-the-loop review for any likeness-adjacent output.
- Labeling and disclosures: Where synthetic voices are used with consent, disclose clearly. Track regional rules on deepfake/transparency obligations.
- Risk screening: Pre-release reviews for likeness risks (voice, face, signature styles). Escalate any "close imitation" to legal.
- Incident response: Playbook for rapid rollback, public statements, and stakeholder comms. Preserve evidence immediately.
- Jurisdictional mapping: Maintain a matrix of likeness, performer, and biometric laws in key markets, plus union/collective bargaining constraints (a minimal data sketch also follows this list).
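Provenance duties become auditable only when they are captured as data. The sketch below is a minimal illustration, assuming a Python asset pipeline; all names (VoiceAssetRecord, asset_id, consent_ref, and so on) are hypothetical, not taken from Aspyr's tooling or any real vendor API.

```python
# Hypothetical provenance record for a synthetic voice asset.
# Field names are illustrative assumptions, not a real pipeline's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from hashlib import sha256

@dataclass(frozen=True)
class VoiceAssetRecord:
    asset_id: str                      # internal ID of the generated audio file
    source_model: str                  # model/version that produced the output
    training_sources: tuple[str, ...]  # datasets or sessions the model drew on
    consent_ref: str                   # pointer to the signed consent on file
    approved_by: str                   # person who signed off on shipping it
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Content hash of the record, so later tampering is detectable."""
        payload = "|".join((
            self.asset_id,
            self.source_model,
            ",".join(self.training_sources),
            self.consent_ref,
            self.approved_by,
            self.approved_at,
        ))
        return sha256(payload.encode("utf-8")).hexdigest()

# Example: register one remastered line and store its hash with the asset.
record = VoiceAssetRecord(
    asset_id="tr4_fr_lara_0042",
    source_model="tts-vendor/v2.1",
    training_sources=("licensed_sessions_2024",),
    consent_ref="contracts/2024/fr-voice-007.pdf",
    approved_by="legal.review@example.com",
)
print(record.fingerprint())
```

Freezing the record and hashing its fields is one simple way to make the approval trail tamper-evident, which supports the evidence-preservation point above.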
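Similarly, a jurisdictional matrix is more useful as queryable data than as a static memo, so release tooling can flag implicated regimes per market. The sketch below is illustrative only; the regime labels are simplified assumptions, not legal advice.

```python
# Hypothetical jurisdictional risk matrix, kept as plain data so release
# tooling can query it. Entries are simplified placeholders.
JURISDICTION_MATRIX: dict[str, dict[str, str]] = {
    "US-IL": {
        "likeness": "right of publicity (state law)",
        "biometric": "BIPA: written consent required for voiceprints",
    },
    "FR": {
        "likeness": "personality rights (Civil Code, Article 9)",
        "performer": "neighboring and moral rights (IP Code)",
    },
    "EU": {
        "transparency": "AI Act disclosure obligations for synthetic media",
    },
}

def flags_for(markets: list[str]) -> set[str]:
    """Collect every regime implicated by a planned release footprint."""
    return {
        f"{market}:{regime}"
        for market in markets
        for regime in JURISDICTION_MATRIX.get(market, {})
    }

# Example: a release shipping in Illinois and France.
print(sorted(flags_for(["US-IL", "FR"])))
```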
What to watch next
Expect litigation to focus on consent, contractual scope, and whether the AI output amounts to an actionable imitation of Cadol's identity and performance. A settlement is plausible, but the case could set influential guidance on synthetic performance rights in remasters.
For publishers, this is a signal event: synthetic media workflows need legal architecture from the start. Treat voice and likeness as protected assets, not just data inputs.
Further reading: WIPO, Performers' Rights; Illinois Biometric Information Privacy Act (BIPA).