Is Hollywood "cooked" by AI? The truth is messier - and more useful - than the hype
Every week, another viral AI clip lands with the same punchline: "Hollywood is cooked." Video models like Seedance 2.0 flood feeds with deepfaked celebrities and glossy, physics-bending set pieces.
It makes for great clicks, but it says little about how films actually get made.
What insiders are saying
Janice Min - a longtime Hollywood operator - says the quiet part out loud: "Everyone's lying just a little bit… Studios are lying about how much they're using it." When pressed if that means more or less, her answer was simple: "More."
She also claims many writers bounce ideas off chatbots. "I dare you to find a screenwriter… not talking to Claude or ChatGPT at the same time." Whether that's widespread or not, the stigma is fading behind closed doors.
The quiet adoption (and why you're not hearing about it)
Last year, "The Brutalist" drew heat after its director confirmed AI was used to enhance the Hungarian accents of Adrien Brody and Felicity Jones. According to Min, that wasn't an outlier - it was a hint of the new normal.
Publicly, the controversy cooled. Even the Academy hasn't taken a firm line; the culture feels "don't ask, don't tell." Min goes as far as to say every Best Picture nominee likely used AI somewhere in the process. Treat that as a claim, not gospel - but the direction of travel is clear.
AI means many things - most of them unsexy
"AI" is an umbrella. Some of it is generative: text-to-video, image synthesis, voice cloning. Some of it is old hat: denoising, upscaling, motion tracking, rotoscoping. The latter have lived in post-production pipelines for years.
So, yes, Hollywood uses "AI." A lot of it looks like cleaner plates, faster VFX turnarounds, and fewer late nights on paint-outs - not instant movies pressed from a prompt.
The gap between sizzle and reality
Viral AI stunts are often theater. That rooftop brawl "starring" Tom Cruise and Brad Pitt? A digital reskin of two stunt performers on a green screen. The spectacle sells the idea that models can conjure full productions at the click of a button.
In practice, the last 10 percent of quality - nuance, continuity, performance - still costs time, money, and human judgment. The internet rarely posts that part.
Where the line gets drawn (for now)
- Writers: Many are testing chatbots for beat sheets, alt lines, and research - while guarding voice and structure. The 2023 WGA deal set boundaries on credit and compensation for AI-assisted work.
- Actors: Consent, compensation, and control over digital doubles are front and center. SAG-AFTRA's AI FAQs outline minimums and approvals.
- Studios: Quiet experimentation in scripting, previs, localization, and VFX. Public posture: cautious. Private reality: test-and-learn.
Reality check for the "it's over" crowd
Generative tools are getting better. They will compress timelines and budgets in specific lanes: previs, background plates, alt takes, localization, marketing assets. That doesn't erase development, performance, direction, or taste.
If anything, it raises the bar. If anyone can crank out spectacle, the scarce asset becomes voice, credibility, and trust.
Practical playbooks
For PR and communications
- Adopt a simple disclosure rule: "We use AI for efficiency, not to replace credited human roles." List the categories (e.g., accent cleanup, background cleanup, subtitle drafts).
- Pre-bake statements for three scenarios: mislabeled deepfake, AI use in post revealed by press, vendor overclaiming capabilities. Keep them short and factual.
- Maintain an AI use log per project. If questioned, you can speak concretely instead of dodging.
- Train spokespeople on terms that matter: generative vs. assistive, training data, consent, likeness rights.
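The AI use log above can start as something very small: an append-only JSONL file per project. Here's a minimal sketch in Python; the field names (tool, category, scope, approver) are illustrative suggestions, not an industry standard:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_log.jsonl")  # one append-only log per project

def log_ai_use(tool: str, category: str, scope: str, approver: str) -> dict:
    """Append one AI-use entry; returns the record that was written."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # e.g. "accent-cleanup model v1.2"
        "category": category,  # e.g. "localization", "vfx-cleanup"
        "scope": scope,        # what footage or text it touched
        "approver": approver,  # who signed off
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

entry = log_ai_use("dialogue-denoise", "post-audio",
                   "reel 3, scenes 12-14", "post supervisor")
```

Because every line is a complete JSON record, the log can be grepped or summarized on demand when a reporter calls.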
For IT and development
- Stand up a model evaluation rig: quality, cost per minute/frame/word, latency, failure modes. Treat models like vendors - SLAs or it didn't happen.
- Data governance: no uploading scripts, dailies, or actor scans to public endpoints without contract and consent. Use red-teaming and prompt logging.
- Provenance: enable watermarking, content credentials, and pipeline metadata. Make it easy to prove what's real.
- Sandbox high-risk use cases (voice, likeness, translation). Ship only with sign-offs from legal and production.
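The evaluation rig doesn't need to start fancy either. A hedged sketch: put each candidate model behind a common interface, time every call, and record cost per unit. The lambda "models" below are stand-ins for real vendor SDK calls:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    model: str
    latency_s: float
    cost_per_unit: float
    output: str

def evaluate(name: str, generate: Callable[[str], str],
             cost_per_unit: float, prompt: str) -> EvalResult:
    """Time one call and attach its unit cost; quality scoring is left
    to human review or a separate rubric."""
    start = time.perf_counter()
    output = generate(prompt)
    latency = time.perf_counter() - start
    return EvalResult(name, latency, cost_per_unit, output)

# Stand-in "models" -- replace with real vendor SDK calls.
candidates = [
    ("model-a", lambda p: p.upper(), 0.002),
    ("model-b", lambda p: p[::-1], 0.0005),
]

results = [evaluate(name, fn, cost, "alt line for scene 12")
           for name, fn, cost in candidates]
cheapest = min(results, key=lambda r: r.cost_per_unit)
```

The point is the discipline, not the harness: once latency and cost land in one table per run, "SLAs or it didn't happen" becomes enforceable.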
For producers and creatives
- Use chatbots for volume (alt lines, beat variations, research), then rewrite for intent and tone. Keep your voice; let the model do the grunt work.
- Treat video models like fast previs: storyboards, stunt planning, mood clips. Final shots still need direction, lighting, and performance.
- Localization: accent enhancement, ADR matching, and subtitle drafts are practical wins - with clear on-screen or credit disclosures where appropriate.
- Credit humans clearly. If AI touched the work, say how. You'll earn trust instead of losing it in a leak.
A studio-ready AI policy you can adopt by Friday
- Consent-first: no training on or cloning of any performer's voice/likeness without written approval and payment terms.
- Human authorship: final writing and edit decisions owned by named creatives; AI cannot hold authorship or replace earned credits.
- Approved tools list with version locks; all other tools require security and legal review.
- Provenance on by default; store audit trails for model prompts, outputs, and approvals.
- Plain-English disclosure in marketing and press notes covering where AI assisted.
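The provenance and audit-trail items can be grounded in something as small as hashing each prompt/output pair at generation time, so a later dispute can be checked against the record. A sketch under assumed field names; this is a starting point, not a C2PA implementation:

```python
import hashlib
from datetime import datetime, timezone

def audit_record(model: str, prompt: str, output: str, approver: str) -> dict:
    """Store digests of the prompt and output so the record can later
    prove exactly what the model was asked and what it produced."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "approver": approver,
    }

rec = audit_record("subtitle-draft-model", "Translate reel 2 dialogue",
                   "draft subtitles", "legal")
```

Hashing rather than storing raw text keeps scripts and dailies out of the audit store while still making the trail verifiable: the same prompt always produces the same digest.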
The signal beneath the noise
Studios are using more AI than they admit. Creatives are experimenting more than they claim. And the loudest AI demos are often more sizzle than steak.
The winners won't be the ones yelling "It's over." They'll be the ones who ship faster, keep taste intact, clear rights, and communicate honestly about their tools.