D&D legend backs Larian as AI debate flares - what it means for working creatives
The AI debate hit another fever pitch after Larian shared that it's experimenting with AI for concept art and placeholder text on its next Divinity project, while promising zero AI-generated content in the final game. That sparked a wave of criticism - and a notable defense from a foundational figure in tabletop RPGs, Robert Kuntz.
What Kuntz actually said
Robert Kuntz, one of the original D&D designers, urged fans to "lighten up on the creatives who work their arses off day in and day out," extending support to Larian and CEO Swen Vincke. In a follow-up, he dismissed the harsher takes as "eye-popping rage bait" from "bad actors," adding that the RPG community he helped build used to show more appreciation and civility toward creators.
Agree or not, the message is clear: creators are under pressure, and discourse is getting meaner. That helps no one - not teams shipping work, not communities waiting on great games.
Where Larian stands on AI
Vincke has acknowledged that AI is a hot-button issue and said a lot was "lost in translation" around his previous comments. Larian says it's evaluating new tech and using AI as a research tool in early stages - not for content that ships. An AMA is planned to clarify the approach and answer questions directly.
If you want the studio's official stance, start here: Larian Studios.
Why this matters to creatives
Every creative field is wrestling with the same question: how do you explore new tools without burning trust? The answer isn't "ban everything" or "automate everything." It's policy, process, and honest communication.
Even outside games, teams are testing AI for brainstorming, references, and placeholder content - while keeping final outputs human-made. That middle path requires boundaries you can defend publicly.
Use AI without burning trust: a practical playbook
- Declare the line. If AI won't touch final assets, say it. Put it in writing and stick to it.
- Label experiments. If AI is used for mood boards or placeholder text, label it internally so it never ships by accident (see the sketch after this list).
- Keep human review final. A named person signs off on every deliverable. No exceptions.
- Track sources. Maintain an audit trail for references, prompts, and datasets where applicable.
- Share your policy early. Don't wait for a backlash. Publish a short FAQ and update it as your process matures.
- Invite questions. AMAs, office hours, and community Q&A sessions go a long way toward reducing speculation.
- Test for quality, not hype. If AI doesn't make the work better or faster without hurting ethics or IP, drop it.
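For studios that track assets through a build manifest, the "label experiments" rule can even be enforced mechanically. Here's a minimal sketch of a pre-ship gate - the manifest format, file paths, and the `ai_` tag convention are all hypothetical, not any studio's actual pipeline:

```python
# ship_check.py - minimal sketch of a pre-ship gate for labeled AI experiments.
# Assumes a hypothetical JSON build manifest: a list of assets, each with a
# "path" and a "tags" list. Tags starting with "ai_" mark experimental content.
import json
import sys

def find_ai_tagged(manifest_path: str) -> list[str]:
    """Return paths of assets tagged as AI-assisted in the build manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. [{"path": "art/x.png", "tags": ["ai_placeholder"]}]
    return [
        asset["path"]
        for asset in manifest
        if any(tag.startswith("ai_") for tag in asset.get("tags", []))
    ]

if __name__ == "__main__":
    flagged = find_ai_tagged(sys.argv[1] if len(sys.argv) > 1 else "build_manifest.json")
    if flagged:
        print("Ship blocked: AI-tagged assets found:")
        for path in flagged:
            print(f"  - {path}")
        sys.exit(1)  # nonzero exit fails the CI/packaging step
    print("OK: no AI-tagged assets in the shipping set.")
```

Run something like this as a CI step before packaging a build: a nonzero exit blocks the ship until flagged assets are replaced or deliberately re-tagged by a human reviewer.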
The bigger conversation
Another high-profile dev recently warned AI could threaten big publishers - then added he'd still use it if it let a smaller team ship an epic game faster. That contrast captures the moment we're in: hope, fear, and a lot of noise.
As creators, we don't control the debate, but we do control our process and how we communicate it. Lead with clarity, show your work, and keep the final say human.
For teams building their AI policy
If you're setting up guidelines for your studio or creative team and want structured options by role, this curated index can help: AI courses by job.
Respect the craft. Share the process. Ship work you're proud of.