Games Workshop draws a clear line on AI: people first
While many companies race to bolt AI onto everything, Games Workshop is taking the opposite approach, and it's saying so out loud. CEO Kevin Rountree has set a firm, cautious policy that centers human creators, brand safety, and IP protection.
"We have agreed [on] an internal policy to guide us all, which is currently very cautious," he said. "We do not allow AI-generated content or AI to be used in our design processes or its unauthorised use outside of [Games Workshop], including in any of our competitions."
What Games Workshop is doing
The stance isn't just about creative standards. Rountree flagged the security risks of everyday devices now shipping with AI features built in. "We also have to monitor and protect ourselves from a data compliance, security, and governance perspective; the AI or machine learning engines seem to be automatically included on our phones or laptops whether we like it or not."
AI isn't being ignored, just handled with discipline. A small group of senior managers is tracking the tech, though "none are that excited about it yet." The company will continue studying it while "maintaining a strong commitment to protect our intellectual property and respect our human creators."
Over the back half of 2025, the company focused on hiring more creatives, including concept artists, writers, and sculptors, to keep Warhammer's look, lore, and models distinctly human. As Rountree put it, these are "talented and passionate individuals that make Warhammer the rich, evocative IP that our hobbyists and we all love."
Why this matters to leaders in management, PR, and communications
- It's a clear, defensible policy: define where AI is allowed and where it isn't. That reduces brand risk and internal confusion.
- Security is part of the AI conversation. Treat consumer AI features on laptops and phones as potential data-leak vectors.
- Signal your values. Publicly prioritizing creators and IP earns trust with fans, media, and partners.
- Keep a small oversight group. You don't need full deployment to stay informed and advise the business.
- Invest in humans. Hiring and developing creative talent backs up the message and preserves differentiation.
A quick playbook you can adapt
- Write a short internal AI policy: what's banned (e.g., using AI to generate creative assets or train on them), what's allowed (e.g., internal research), and who approves exceptions.
- Lock down endpoints: disable unvetted AI features on managed devices, update data-handling rules, and audit access logs (a minimal sketch of the device piece follows this playbook).
- Create an AI oversight cell: 3-5 cross-functional leaders who track developments and brief the exec team quarterly.
- Protect IP at the edges: update competition rules, vendor contracts, and creator guidelines to prohibit unauthorized AI use.
- Communicate clearly: share the stance with staff and fans, and highlight investments in human creators.
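To make the "lock down endpoints" item concrete, here is a minimal sketch, assuming a Windows fleet, that sets the documented "Turn off Windows Copilot" group-policy registry value for the current user. The policy path and the TurnOffWindowsCopilot value were documented for certain Windows 11 builds and may not exist on newer ones; at scale you would push this through Group Policy or MDM tooling rather than a script, and other AI features on other platforms need their own controls.

```python
"""Minimal sketch: disable one built-in AI feature on a Windows endpoint.

Illustrative only. The registry path below corresponds to the documented
"Turn off Windows Copilot" policy on certain Windows 11 builds; newer builds
or other AI features require different controls.
"""

import winreg

# Documented policy location for the "Turn off Windows Copilot" setting.
POLICY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"
POLICY_VALUE = "TurnOffWindowsCopilot"


def disable_windows_copilot() -> None:
    """Create (or open) the policy key and set the disable flag for the current user."""
    with winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, POLICY_PATH, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, POLICY_VALUE, 0, winreg.REG_DWORD, 1)


if __name__ == "__main__":
    disable_windows_copilot()
    print("Copilot policy flag set; sign out and back in for it to take effect.")
```

The pattern is the point rather than the specific key: a central policy decision gets pushed to every managed device, and an audit of what AI features are already enabled tells you how far reality is from the policy.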
If you need a fast primer for cross-functional teams on safe, practical AI literacy, without pushing production use, browse courses by job or the latest AI courses.
For legal and policy teams grounding their approach to IP, this overview from the World Intellectual Property Organization is a useful reference point.