Guillermo del Toro says he'd rather die than use AI, calls it Frankenstein hubris

Guillermo del Toro draws a hard line: he'd rather die than use generative AI, tying it to the hubris in his Frankenstein. His message: human craft first, consequences considered.

Categorized in: AI News, General, Writers
Published on: Oct 26, 2025

Guillermo del Toro draws a hard line on AI in filmmaking

As the debate around AI keeps getting louder in Hollywood, Guillermo del Toro wants no part of it. The three-time Oscar winner said he would "rather die" than use generative AI in his films, calling the tech an echo of the arrogance he sees in his adaptation of Mary Shelley's Frankenstein, led by Oscar Isaac as the titular scientist.

"AI, particularly generative AI - I am not interested, nor will I ever be interested," he said in a recent interview. "I'm 61, and I hope to be able to remain uninterested in using it at all until I croak. ... The other day, somebody wrote me an email, said, 'What is your stance on AI?' And my answer was very short. I said, 'I'd rather die.'"

He sharpened the point: "My concern is not artificial intelligence, but natural stupidity." In other words, it's not the code; it's the people steering it without foresight.

Del Toro also drew a clear parallel to the story he's telling. He sees his Frankenstein as a mirror to tech culture: creators building because they can, not because they should. "He's kind of blind, creating something without considering the consequences, and I think we have to take a pause and consider where we're going."

This isn't a soft stance. At a recent New York City screening, he punctuated it with "fuck AI!" That tells you where the line is for him: human craft first, human judgment always.

What this means for writers and creators

  • Pick a side on purpose. If you keep AI out of your process, commit to the discipline, voice, and taste that make your work unmistakably yours.
  • If you use AI, define boundaries. What's off-limits (voice, ideas, structure)? What's allowed (research, formatting, outlines)? Put it in writing, especially with clients and collaborators.
  • Think in consequences, not conveniences. Ask: who benefits, who's harmed, and what happens if this scales?
  • Keep your fingerprints on the work. Readers can feel when a piece was lived, observed, and edited by a human who cares.
  • Credit and consent matter. Don't feed private materials or others' work into tools without permission.

A simple framework to stress-test your creative process

  • Clarity: Why are you making this? What do you want the audience to feel or do?
  • Constraints: What must remain human? Where can tools support without steering?
  • Checks: How will you review for accuracy, ethics, and originality?
  • Continuity: Can you defend your process a year from now with the same pride?

Del Toro's stance isn't about fear. It's about responsibility. Tools change. The standard you hold doesn't have to.

Bottom line: Create with consequence. Protect the parts of your process that make your work unmistakably yours, and if you experiment with AI, do it with guardrails, consent, and taste.

If you decide to explore AI tools responsibly-without outsourcing your voice-this curated set of learning paths by role can help you set clear boundaries and use cases: AI courses by job.
