Fashion and beauty brands face new consent and disclosure rules as AI use of model likenesses expands

Fashion and beauty brands using AI to generate models, replicas, or campaign imagery now face binding laws in New York, Tennessee, and elsewhere. Contracts, consent requirements, and disclosure rules have tightened significantly since 2025.

Categorized in: AI News, Legal
Published on: May 02, 2026

AI in Fashion and Beauty Now Faces Real Legal Constraints

Artificial intelligence has moved from experimental tool to production workhorse in fashion, apparel, and beauty companies. Brands now use AI to generate designs, create virtual try-on experiences, build campaigns around synthetic models, and even resurrect deceased fashion icons. But lawmakers have caught up, and the legal patchwork is tightening fast.

The appeal for brands is straightforward: AI cuts costs, accelerates creative cycles, and enables personalized shopping at scale. Generative systems produce new imagery in hours instead of days. The problem is equally clear: these systems often rely on real people's images, bodies, voices, and performances, sometimes without clear consent or appropriate compensation.

New York Sets the Standard for Model Protections

New York has become the primary jurisdiction for AI-and-model legislation. Three laws now directly govern how brands can use AI with fashion and beauty talent.

The Fashion Workers Act, effective June 19, 2025, requires explicit consent for AI use involving models. That consent must specify scope, purpose, duration, and compensation. Routine retouching is excluded, but anything beyond that needs a signed agreement.

The Digital Replica Law, effective January 1, 2025, voids digital replica agreements that (1) allow a replica to substitute for in-person work the person would have performed, (2) lack a reasonably specific description of intended use, or (3) were negotiated without the individual having legal counsel or union representation. Brands can no longer rely on vague language to lock up a model's digital double indefinitely.

The AI Transparency in Advertising and Synthetic Performer Disclosure Law, effective June 9, 2026, shifts focus to consumers. Any commercial ad using an AI-generated synthetic performer must clearly disclose that fact. Failure to label triggers civil penalties.

A separate posthumous right of publicity law requires consent from heirs or executors before using a deceased person's name, image, voice, or likeness in campaigns. This directly affects launches built around "resurrected" fashion or beauty icons.

Other States and Federal Proposals Are Moving in Parallel

Tennessee's ELVIS Act and Arkansas's HB 1071 explicitly extend publicity rights to AI-generated likenesses and voices, banning unauthorized commercial use and imposing civil and criminal penalties.

At the federal level, the proposed NO FAKES Act would create a nationwide right against unauthorized digital replicas of a person's likeness, voice, or performance. The Deepfake Liability Act, Take It Down Act, and Protecting All Digital Realities Act aim to make platforms and creators more accountable for harmful deepfakes and to streamline takedowns.

Proposed transparency measures signal where law is heading. California's AI Transparency Act would require detection tools for AI-modified media. New York's synthetic content provenance bill would push AI systems to embed cryptographic provenance data, creating a verifiable trail of how an asset was generated. These proposals suggest watermarking and provable origin information for AI content will soon be expected rather than optional.
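To make the idea of "cryptographic provenance data" concrete, here is a minimal sketch of binding provenance metadata to an asset's content hash and verifying it later. This is an illustration only, not any bill's required mechanism: production systems such as the C2PA standard use signed manifests with certificate chains rather than the shared HMAC key assumed here.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # illustration only; real systems use PKI, not a shared secret

def attach_provenance(asset_bytes: bytes, generator: str, model_version: str) -> dict:
    """Create a provenance record binding metadata to the asset's content hash."""
    record = {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": generator,
        "model_version": model_version,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(asset_bytes: bytes, record: dict) -> bool:
    """Check that the signature is intact and the asset matches the recorded hash."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["sha256"] == hashlib.sha256(asset_bytes).hexdigest())

image = b"...synthetic campaign image bytes..."
rec = attach_provenance(image, generator="internal-genai", model_version="v2")
print(verify_provenance(image, rec))         # True: asset and record untampered
print(verify_provenance(image + b"x", rec))  # False: asset was altered after signing
```

The point of such a record is the "verifiable trail" the proposals describe: anyone holding the asset and its record can detect tampering, which is what makes disclosure labels auditable rather than purely self-reported.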

Copyright and Training Data Create Separate Liability

In Thaler v. Perlmutter, the D.C. Circuit confirmed that purely AI-generated works without human authorship are not copyrightable under current U.S. law. Brands using AI to generate prints, patterns, or imagery need meaningful human creative input to claim copyright protection.

AI training data is under increasing scrutiny. The proposed Generative AI Copyright Disclosure Act would require AI developers to disclose their use of copyrighted works in training data. California's AB 412 would require developers to obtain permission and pay licensing fees to use copyrighted works for training.

Traditional advertising rules apply regardless of whether content is AI-generated. A synthetic model or AI-generated endorsement that misleads consumers can trigger false advertising and unfair or deceptive practices claims just like any other campaign.

Biometric Data and Privacy Laws Add Another Layer

Illinois's Biometric Information Privacy Act and California's CCPA/CPRA regulate how brands collect and use face scans for virtual try-on services, body scans for size and fit tools, and voiceprints for voice-based experiences. Failing to obtain consent or mishandling this data can lead to statutory damages and class actions.

The European Union's Artificial Intelligence Act introduces a risk-based framework with transparency obligations for AI-generated and deepfake content. Multinational brands using virtual try-on tools, recommendation engines, and synthetic models in Europe must align with those standards.

Contracts Need to Address AI Use Explicitly

Many existing modeling agreements were drafted before AI tools existed. They grant rights to use "images," "photographs," or "recordings" but do not contemplate synthetic media, digital replicas, or AI-generated derivatives.

This gap creates a common scenario: a model is hired for a single campaign. After the shoot concludes, the brand uses AI to digitally replicate the model's likeness and place it into additional outfits, campaigns, or promotional formats that were never discussed or approved. From the brand's perspective, this feels efficient. From the model's perspective, it represents an unauthorized expansion of use that may affect future bookings or dilute exclusivity.

Updated agreements should spell out whether the brand will create digital replicas, scans, or 3D models; where and how those assets may be used, including specific campaign names, channels, territories, and time periods; and whether talent imagery may be used to train internal or third-party AI systems. Compensation for AI-related uses should be addressed separately from standard day rates or flat fees.

Models should seek explicit prohibitions on AI-generated replicas or derivatives absent separate written consent. Approval rights, meaning the ability to review and approve AI-generated content before publication, are critical where such content could affect brand alignment or public perception.

Higher-profile talent may negotiate additional protections: restrictions on sensitive product categories, guards against political or controversial uses, and provisions addressing reputational harm.

Brands Need Internal Processes, Not Just Legal Language

Contract drafting is only one part of the solution. Many brands are forming cross-functional AI committees that bring together legal, marketing, HR, and IT to inventory tools; map how they touch talent images, third-party content, consumer data, and creative IP; and flag higher-risk uses before launch.

Vendor management is becoming more rigorous. Brands are asking business partners how their tools are trained and whether they support watermarking and provenance, and are demanding indemnification for claims related to AI use.

Workflows for obtaining informed consent, especially for biometric scans, and for labeling AI-generated or heavily altered content where required by law are becoming standard operating procedure.

Done thoughtfully, AI can deliver real efficiencies: fewer returns, better fit and shade matching, faster campaign production, smarter inventory decisions, and more relevant product recommendations. The difference between competitive advantage and a public relations crisis often lies in how a company treats the humans behind the data.


