Voice, Likeness, and Fair Use in the Age of AI
Artificial intelligence is increasingly intersecting with human identity through the replication of voice and likeness. This raises complex legal questions about ownership, consent, and protection in today’s digital environment. High-profile cases involving AI-generated voices resembling celebrities such as Taylor Swift, Tom Hanks, and Scarlett Johansson have spotlighted these unresolved issues.
In May 2024, Scarlett Johansson publicly objected to OpenAI’s use of a voice for its ChatGPT assistant “Sky,” alleging it closely resembled her own. The situation echoed the 2013 film Her, in which Johansson voiced a fictional AI. A social media post by OpenAI’s CEO referencing “her” around the release of “Sky” intensified public scrutiny. OpenAI paused the use of “Sky” and clarified that the voice had been recorded by a different actor and was not intended to imitate Johansson. The incident highlights the legal uncertainty around AI-generated content that mimics real people, raising the key question: when does AI use cross from lawful expression into unlawful appropriation?
Copyright Law
Copyright protects original works fixed in a tangible medium, but current U.S. law offers limited protection for voice and likeness. The D.C. Circuit Court of Appeals reaffirmed in Thaler v. Perlmutter that copyright requires human authorship. Works created solely by AI without meaningful human input are not eligible for protection. This aligns with U.S. Copyright Office guidance emphasizing human creativity as a requirement.
Voice and likeness alone typically are not protectable under copyright unless embedded within a larger copyrighted work, such as a film or song. Even then, only that particular instance is protected, not the voice or likeness itself. This restricts individuals’ ability to assert copyright claims against unauthorized AI-generated imitations of their voices or appearances.
The fair use doctrine allows limited use of copyrighted materials without permission for purposes like commentary, criticism, or parody. Courts evaluate fair use by considering:
- The purpose and character of the use, including whether it is commercial or for nonprofit educational purposes;
- The nature of the copyrighted work;
- The amount and substantiality of the portion used;
- The effect on the potential market or value of the copyrighted work.
Some AI-generated works that imitate real individuals may fall under fair use. However, courts have yet to clarify how fair use applies to synthetic media, especially when dealing with non-copyrightable elements like voice and likeness. This creates tension between protecting individual identity and preserving free expression under the First Amendment.
Right of Publicity
The right of publicity protects against unauthorized commercial use of identity elements like name, likeness, and sometimes voice. This right varies by state, as there is no comprehensive federal statute. Successful claims usually require proof that the identity is identifiable, the use was unauthorized, it was commercial, and monetary harm ensued.
Cases such as Midler v. Ford Motor Co. and Waits v. Frito-Lay show that distinctive voice imitations can be actionable, particularly in advertising. However, these protections mostly benefit well-known figures, leaving others with limited recourse.
Recent Legislative Developments
While federal legislation remains limited, states have stepped up to regulate AI-generated impersonations. All states recognize some common law protections for unauthorized use of name, image, or likeness, and about 30 states have statutory rights of publicity and privacy laws. States like California, Tennessee, and Washington explicitly regulate digital replicas under these laws.
In 2025, 13 states introduced new legislation targeting digital replicas. California enacted two notable measures in 2024: AB 1836 extends post-mortem publicity rights to AI-generated replicas, and AB 2602 requires informed consent for digital replicas in entertainment contracts. Arkansas amended its Frank Broyles Publicity Rights Protection Act to include AI-generated voice and likeness protections.
This patchwork of state laws can create compliance challenges for developers, platforms, and rights holders due to inconsistent standards across jurisdictions.
Conclusion
The legal landscape surrounding AI-generated voice and likeness rights is still unsettled. Existing doctrines—copyright, fair use, and right of publicity—offer some tools but may not fully address synthetic media challenges. Courts and lawmakers must balance protecting personal identity with safeguarding free expression and technological innovation.
A clear and consistent legal framework targeting liability primarily at bad actors who intentionally misuse digital replicas, rather than the tools enabling their creation, is essential. This approach can help protect individual rights while supporting responsible AI development.