No Soulless Side Quests: Embracer CEO's AI Reality Check and Transparency Pledge

Embracer CEO Phil Rogers says AI can aid development, but that misusing it without ethics or consent produces hollow games. He pledges transparency and rejects generic quests and synthetic AI voices.

Categorized in: AI News, IT and Development
Published on: Sep 24, 2025

A rare C-suite reality check on AI in game development

At Embracer's 2025 annual meeting in Karlstad, Sweden, CEO Phil Rogers delivered a message most executives avoid: AI is useful, but it can make games feel hollow if misused. He was upbeat about the technology, calling generative AI a "power multiplier" and a "strategic catalyst", while warning that the biggest risk is deploying it without ethics, consent, or taste.

His commitment was clear: "We're committed to being transparent with players about how, where we use AI in our dev process." He also called out a line many players and devs agree on: "Players aren't longing for generic, soulless side quests or synthetic AI voices."

Context: Embracer's reset

Embracer owns the Lord of the Rings franchise and Tomb Raider studio Crystal Dynamics. After major divestments and layoffs, it plans to spin off studios and rebrand as Fellowship Entertainment, with a focus on big-budget games. This backdrop matters: efficiency is a must, but so is trust with players and creators.

Key signals from Rogers

  • AI is an accelerator, not a substitute for craft. Use it to ship better games, not cheaper-looking ones.
  • Ethics are table stakes. "The greatest risk is not in using AI, but in using it without a strong ethical framework."
  • Transparency by default. Tell players where AI touched the work.
  • Protect talent. "Artists, actors, writers need protection from plagiarism." Consent and compensation aren't optional.

What this means for engineering and content leads

  • Keep humans in the loop. AI drafts; directors, designers, and editors decide.
  • Set a written AI policy. Approved models, data sources, privacy rules, output ownership, and audit logging.
  • Provenance and anti-plagiarism. Maintain source logs, use similarity checks, and adopt content credentials (e.g., C2PA) for assets and VO (see the provenance sketch after this list).
  • Voice rules. No cloning without explicit license and payment. Document consent per line, per language.
  • Quality bars. Define "no generic content" criteria: repetitiveness thresholds, style guides, and mandatory editorial sign-off.
  • Data hygiene. Separate training/eval sets, scrub PII, and forbid model training on unlicensed IP.
  • Player disclosure. Credit screens and patch notes should state how AI was used.
  • Metrics. Track time saved, defect rates, and player sentiment to justify AI usage.
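
To make the provenance and review-gate items concrete, here is a minimal sketch of the kind of record a pipeline could attach to each AI-assisted asset. The schema, field names, and model identifier are hypothetical, and real content credentials would be embedded via a C2PA-compatible toolchain rather than hand-rolled JSON:

```python
# Minimal sketch of a provenance record for an AI-assisted asset.
# The schema and field names are hypothetical, not a C2PA API; real
# content credentials would come from a C2PA-compatible toolchain.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AssetProvenance:
    asset_id: str                 # internal asset identifier
    model: str                    # approved model + version used
    prompts: list[str]            # source prompts that produced the draft
    references: list[str]         # reference material consulted
    licenses: list[str]           # licenses covering training/reference data
    voice_consent: bool = False   # explicit, per-line consent on file for VO
    reviewer: str = ""            # human who signed off (empty = not shipped)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def ready_to_ship(self) -> bool:
        # Review gate: no human sign-off and license trail, no ship.
        return bool(self.reviewer) and bool(self.licenses)

record = AssetProvenance(
    asset_id="vo/act2/guard_bark_07",
    model="example-tts-v3",                  # hypothetical model name
    prompts=["Guard warns player near the gate, weary tone"],
    references=["style_guide/guards.md"],
    licenses=["VO-license-2025-014"],
    voice_consent=True,
    reviewer="audio_director",
)
print(json.dumps(asdict(record), indent=2))
```

The `ready_to_ship` gate encodes the "humans in the loop" rule: an asset with no named reviewer and no license trail never reaches a build.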

Where AI fits well right now (without losing the soul)

  • Engineering: codemods, boilerplate, tests, static analysis, shader linting.
  • Content ops: asset tagging, LOD generation, animation cleanup, localization first pass.
  • Design support: quest idea generation as briefs (never final), NPC bark variations as drafts.
  • QA and support: bug triage, log summarization, duplicate detection (a stdlib sketch follows this list).
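
As a taste of what "duplicate detection" can mean in practice, here is a minimal sketch using only Python's standard library. A production triage bot would more likely use embeddings; the report texts and threshold below are illustrative:

```python
# Minimal sketch of duplicate bug-report detection with the standard
# library; a real triage bot would likely use embedding similarity.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two report summaries."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(new_report: str, open_reports: dict[str, str],
                    threshold: float = 0.75) -> list[tuple[str, float]]:
    """Flag open reports whose summary closely matches the new one."""
    hits = [(rid, similarity(new_report, text))
            for rid, text in open_reports.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)

open_reports = {
    "BUG-101": "Game crashes when loading save on level 3",
    "BUG-102": "Audio cuts out during cutscenes on PS5",
}
print(find_duplicates("Crash while loading a level 3 save file", open_reports))
```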

What to avoid

  • Shipping AI-written quests or final VO without human authors and actors in control.
  • Training on datasets you can't legally prove you have the right to use.
  • Auto-populating open worlds with filler content to hit "hours played."
  • Replacing test strategy with unvalidated model outputs.

A simple AI policy starter you can deploy this quarter

  • Access control: approved tools list, model versions, and who can use what.
  • Data contracts: allowed data sources, storage locations, retention, and redaction rules.
  • Review gates: human sign-off for narrative, VO, character art, and any player-facing text.
  • Provenance: attach source prompts, references, and licenses to every AI-assisted asset.
  • Disclosure: in-game credits + release notes outlining AI usage areas.
  • Monitoring: log prompts/outputs, run similarity checks, and audit weekly (see the logging sketch after this list).
  • Incident response: takedown and patch process for flagged content within 72 hours.
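
For the monitoring item, a minimal logging sketch, assuming a simple append-only JSONL file; the tool name, field set, and file path are illustrative, not a specific product's API:

```python
# Minimal sketch of prompt/output audit logging to an append-only
# JSONL file; tool names, fields, and paths are illustrative.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit.jsonl"

def log_generation(user: str, tool: str, prompt: str, output: str) -> None:
    """Append one auditable record per AI generation."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,                         # must be on the approved list
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output_chars": len(output),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_generation(
    user="quest_designer_3",
    tool="example-llm-v2",                    # hypothetical approved tool
    prompt="Draft three bark variations for the dock foreman",
    output="1) 'Mind the crates!' ...",
)
```

Hashing the output rather than storing it verbatim keeps the log auditable without copying potentially sensitive content into yet another location.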

What to watch at Embracer/Fellowship

  • Whether credits and patch notes explicitly call out AI usage.
  • Talent agreements that address voice cloning, usage windows, and royalties.
  • Whether AI-assisted workflows reach big-budget titles without a drop in narrative or performance quality.

If you want the official company line and future policy updates, check the corporate site: Embracer Group.

For teams pressure-testing their toolchain, this curated index can help you compare coding assistants and guardrail add-ons: AI tools for generative code.

Bottom line: treat AI like a capable junior: fast, tireless, and prone to bland output if left unsupervised. The craft still comes from your writers, actors, designers, and the leaders who set the bar.