"Personally, I hate it": Tarsier Studios' narrative director on AI in game development
Tarsier Studios, the Swedish team behind Little Nightmares, is heads-down on its next project, Reanimal. It's set to launch in February 2026 on PC, PlayStation 5, Switch 2, and Xbox Series X/S.
In a recent conversation, narrative director Dave Mervick shared a candid take on AI. "There isn't a studio consensus on AI. Personally, I hate it, but that's because I haven't seen enough of its positive use as an artistic tool, and because I've seen The Terminator too many times, but as with any technology, it depends on how it is used. The internet can be a glorious thing and an utter sewer, social media can connect people and isolate them, so it remains to be seen how AI will evolve. It could become a creatively bereft human centipede, or it could be a tool that democratizes the art world and unlocks the creativity inside people. I can only hope for the latter."
Why this hits home for dev teams
Many studios are split on AI. Engineers see speed, producers see risk, and creatives worry about taste and trust. Mervick's stance reflects a common concern: AI can speed up production, but it can also flatten voice, style, and originality if used carelessly.
If you build or lead pipelines, the useful question isn't "AI or no AI?" It's "Where does AI add value without eroding craft, rights, or player trust?"
Practical guardrails for using AI in game production
- Prototype, don't publish: Use AI for concept exploration, placeholder VO, temp text, and quick references. Keep final art, narrative, VO, and music human-led with clear approvals.
- Protect your inputs: Don't train on proprietary or unlicensed datasets. Audit model sources, keep a license register, and require vendor data processing agreements.
- Human-in-the-loop by default: Writers and art leads should own taste checks. Lock style guides and run consistency reviews before anything ships.
- Disclose and document: Mark AI-assisted assets in your asset DB. Track prompts, versions, and model settings for later audits.
- Code assistance with guardrails: Allow generation in sandboxes only. Enforce linting, tests, SCA/DAST, and security review on all AI-written code.
- Performance and privacy: Prefer on-prem or private endpoints for sensitive content. Budget GPU/VRAM and latency early to avoid surprises.
- Union and likeness issues: No synthetic VO or image likeness without explicit contracts. Respect regional labor rules.
- Bias and cultural checks: Run pre-flight reviews for sensitive themes, character design, and localization. Don't outsource taste.
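The "disclose and document" guardrail above can be made concrete with a small provenance record per AI-assisted asset. This is a minimal sketch, not a real asset-DB schema: the class name, fields, and example values are all hypothetical, but the idea of hashing the prompt plus model settings gives you a stable fingerprint for later audits.

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Hypothetical provenance record for an AI-assisted asset.
# Field names are illustrative, not any studio's actual schema.
@dataclass
class AIAssetRecord:
    asset_id: str
    model_name: str
    model_version: str
    prompt: str
    settings: dict
    created_at: str = field(default="")

    def __post_init__(self):
        if not self.created_at:
            self.created_at = datetime.now(timezone.utc).isoformat()

    def fingerprint(self) -> str:
        # Stable hash of prompt + settings so an auditor can verify
        # the stored inputs match what actually generated the asset.
        payload = json.dumps(
            {"prompt": self.prompt, "settings": self.settings},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

record = AIAssetRecord(
    asset_id="env_fog_concept_04",
    model_name="example-image-model",
    model_version="1.2",
    prompt="misty harbor at dusk, placeholder concept",
    settings={"seed": 42, "steps": 30},
)
print(json.dumps(asdict(record) | {"fingerprint": record.fingerprint()}, indent=2))
```

Because the fingerprint is derived only from sorted, serialized inputs, two records with identical prompts and settings always hash the same, which is what makes spot-checking an asset database practical.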
Leadership takeaways
- Set a written AI policy: What's allowed, what's not, and how exceptions are approved. Keep it short, enforceable, and updated each quarter.
- Map risks to gates: Add checks to greenlight, content review, legal, and pre-ship. Treat AI like any other external dependency.
- Train your team: Teach prompt hygiene, license basics, and failure modes. Make it part of onboarding.
- Respect authorship: For narrative, protect voice by treating AI as an assistant, not an author. Credit humans who do the real work.
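"Map risks to gates" is easiest to enforce when the policy lives as data rather than a wiki page. Here's one possible shape, with entirely made-up gate and check names; the point is that a gate can't pass while any of its checks are outstanding.

```python
# Hypothetical mapping of AI-usage checks to production gates.
# Gate and check names are illustrative, not a standard taxonomy.
GATES = {
    "greenlight": ["ai_policy_acknowledged", "dataset_licenses_on_file"],
    "content_review": ["human_taste_check", "ai_assets_flagged_in_db"],
    "legal": ["likeness_contracts_signed", "vendor_dpa_signed"],
    "pre_ship": ["ai_code_passed_tests", "bias_review_complete"],
}

def missing_checks(gate: str, completed: set) -> list:
    """Return the checks still outstanding for a gate, in policy order."""
    return [check for check in GATES[gate] if check not in completed]

done = {"human_taste_check"}
print(missing_checks("content_review", done))  # → ['ai_assets_flagged_in_db']
```

Treating the policy as a dictionary means the quarterly update the policy bullet calls for is a one-line diff, and CI or a producer dashboard can query it the same way.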
For context, Tarsier created Little Nightmares and later handed the series to Supermassive Games, while the studio now focuses on Reanimal. That kind of handoff puts creative identity under a microscope, which is another reason teams care so much about how tools influence taste and tone.
If you want a sense of where Tarsier is heading next, keep an eye on Reanimal and the studio's updates at tarsier.se. The full interview will be published on Gamereactor soon.
Looking to upskill your team responsibly without burning time? Browse role-based AI courses and certifications here: Complete AI Training - Courses by Job.