Games Workshop bans AI in design to protect Warhammer's human creators

Games Workshop bans AI in design and competitions to protect IP and human creators. The takeaway for managers: set clear no-go zones, approve safe uses, and keep learning.

Categorized in: AI News, Management
Published on: Jan 17, 2026

Games Workshop bans AI in design: what managers should take away

Games Workshop has drawn a hard line: no AI-generated content and no AI use in design processes. CEO Kevin Rountree called the company's internal policy "very cautious," adding: "We do not allow AI-generated content or AI to be used in our design processes or its unauthorised use outside of Games Workshop including in any of our competitions."

The company also left room for learning. A few senior managers will "continue to be inquisitive about the technology," while the business maintains a strong commitment to protect human creators and its intellectual property.

Why this matters for your team

This move is about control: IP ownership, provenance, and data risk. As the company noted, "AI or learning engines seem to be automatically included on our phones or laptops whether we like it or not." That means shadow use is likely, even if your policy says otherwise.

Clear boundaries beat blanket bans that no one can follow. If people know where AI is off-limits and which tools are approved, they're less likely to work around you.

Expert guidance in plain English

Lucinda Reader advises: "[Policies about AI] should not be created in isolation. The business needs to be engaged, and guidelines should be agreed with the people who understand the work, and the risk." She also warns: "Caution around AI is sensible, but pretending it is not already part of day-to-day work is unrealistic."

Kelly Dolphin adds: "HR professionals should work closely with both technology teams and the wider workforce to identify key concerns, including data privacy, and where AI can provide genuine support." Her point is practical: align with the people closest to the work, then set rules that are easy to follow.

A policy you can ship this quarter

  • Draw your hard lines: List where AI is prohibited (e.g., customer data, employee data, confidential R&D, legal review). Put this in writing.
  • Approve the safe wins: Identify allowed use cases with guardrails (summaries of non-confidential content, internal knowledge search, basic coding assists without sensitive data).
  • Pick the tools, not just the rules: Vet vendors for data handling, IP terms, and audit logs. Block unapproved tools at the network level.
  • Set governance: Define owners for policy, risk, security, and audit. Require model/version tracking for any AI used in production workflows.
  • Train and inform: Short training on do/don't, examples, and how to disclose AI use in outputs. Make it easy to ask questions.
  • Review often: Quarterly policy reviews with the teams using AI. Retire what doesn't work; expand what does.
  • Create a safe "lab": Like Games Workshop, allow a small senior group to explore new tools under strict rules, so learning continues without risking core work.
  • Document provenance: For creative and public-facing work, require a statement of human authorship and tool disclosure where relevant.
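The split above between hard lines and approved use cases can be made concrete in a machine-readable form, so every "is this OK?" question runs through the same checks. Here is a minimal sketch; the tool and context names are hypothetical placeholders, not a real registry:

```python
# Minimal sketch of a machine-readable AI-use policy.
# Tool names and context labels are hypothetical; replace them with
# your own hard lines and vetted tools.

PROHIBITED_CONTEXTS = {
    "customer_data", "employee_data", "confidential_rnd", "legal_review",
}

APPROVED_TOOLS = {
    # tool name -> contexts it has been vetted for
    "internal-search-assistant": {"internal_knowledge_search"},
    "summarizer": {"non_confidential_summaries"},
}

def check_use(tool: str, context: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI use."""
    if context in PROHIBITED_CONTEXTS:
        return False, f"hard line: AI is prohibited for {context}"
    if tool not in APPROVED_TOOLS:
        return False, f"{tool} is not on the approved tool list"
    if context not in APPROVED_TOOLS[tool]:
        return False, f"{tool} is not vetted for {context}"
    return True, "approved use with guardrails"
```

Even a toy check like this makes the policy auditable: the hard lines win over everything else, and anything not explicitly approved is blocked by default.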

Creative teams and competitions

Games Workshop bans AI in design work and competitions to protect human creators and maintain clean IP ownership. If you run contests or accept submissions, say explicitly whether AI is allowed, how it must be disclosed, and what counts as original work.

If AI is banned in a workflow, define what that means in practice: no AI prompts, no AI upscaling, no AI textures, and no AI edits. Ambiguity is what creates risk.
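One way to remove that ambiguity for contests is a disclosure record that maps each banned step to an explicit field and rejects the entry if any is declared. A sketch under that assumption, with hypothetical field names:

```python
# Hypothetical submission disclosure record for an AI-banned competition.
# Each field mirrors one concrete "no AI" rule from the policy:
# prompts, upscaling, textures, edits.

AI_STEPS = ("ai_prompts", "ai_upscaling", "ai_textures", "ai_edits")

def validate_submission(disclosure: dict) -> list[str]:
    """Return a list of rule violations; an empty list means eligible."""
    violations = []
    for step in AI_STEPS:
        if disclosure.get(step, False):
            violations.append(f"{step} is not permitted under the competition rules")
    if not disclosure.get("human_authorship_statement", False):
        violations.append("missing statement of human authorship")
    return violations
```

The point of the structure is that "original work" stops being a judgment call at review time: entrants attest to each step up front, and the rules are enforced field by field.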

AI is already on your devices

The company's note about AI being bundled with phones and laptops is a wake-up call. Treat mobile and desktop defaults as in-scope for policy, not an afterthought.

Dolphin's example is a good pattern: "AI-enhanced translation services on mobile platforms can significantly improve communication, but services must be trialled internally and supported with clear training on organisational standards before a company-wide rollout." Pilot first, then scale.

Helpful resources

If you're planning structured training for managers and teams, explore focused pathways on policy, risk, and implementation: AI courses by job role.

Bottom line: Set clear no-go zones, approve safe uses with guardrails, and keep a small path open for learning. That's how you protect your IP without slowing your teams down.

