51% of Japanese Game Companies Use AI in Development, Tokyo Game Show Organizer Finds
51% of Japanese studios use generative AI, led by visual assets, then text and coding; 32% apply it to in-house engines. Capcom, Sega, and Level-5 pilot tools as policies mature.

Over 50% of Japanese game companies now use AI in development
Japan's Computer Entertainment Supplier's Association (CESA) previewed its 2025 industry report: 51% of surveyed domestic game companies are using generative AI in development. The survey covered 54 Japanese studios from June-July 2025, ranging from majors like Capcom and Sega to mid-sized and indie teams. The top use is visual asset generation (characters, props, backgrounds), followed by story/text, then programming support. Notably, 32% report using AI to help build in-house engines.
Source context matters. The research is from CESA, which runs Tokyo Game Show, and was also reported by The Nikkei. This isn't hype; these are production workflows moving faster and getting cheaper.
Where AI is actually used
- Visual assets: concepts, background fills, variant generation, and upscaling.
- Story and text: NPC barks, item descriptions, quest scaffolds.
- Programming support: boilerplate, refactors, tests, and editor tooling.
- Engine work: 32% say AI assists internal engine development.
Who's doing what
Level-5 has publicly used Stable Diffusion for upscaling, artist references, and parts of in-game backgrounds, plus GitHub Copilot for coding. Their approach treats AI as scaffolding to speed art and engineering without replacing core craft.
Capcom's technical group has tested models like Gemini Pro, Gemini Flash, and Imagen for brainstorming and prototyping background assets (e.g., TVs and set dressing). The intent: shift hours from filler assets to high-impact content.
Sega formed an internal Generative AI Committee to systematize adoption across image, motion, and code for internal testing. Nintendo remains cautious, citing IP rights concerns and a wait-and-see stance.
Why this matters for engineering leads
- Backlog pressure: background props, filler VO, and UI micro-assets can bottleneck sprints; AI clears that lane.
- Faster prototyping: previsualization and graybox art arrive earlier, reducing design churn.
- Shifted spend: more budget for hero assets and polish, less for repetitive work.
- Policy gap: without guardrails, you risk IP contamination and style drift.
Policy and pipeline, in plain terms
- Pick 2-3 high-friction targets: background props, NPC barks, test maps. Prove value there first.
- Set a review gate: human-in-the-loop for legal, style, rating, and performance checks.
- Track everything: store prompts, seeds, model versions, and outputs in VCS or your DCC's asset DB (see the logging sketch after this list).
- IP safety: prefer enterprise or on-prem models; avoid uploading proprietary art/code to public endpoints.
- Licensing: document model licenses, training data sources, and vendor ToS; keep audit logs.
- Quality bars: reference sets and A/B tests for style match, artifact rate, and edit time per asset.
- Cost control: batch jobs, cache outputs, quantize models, and measure unit cost per accepted asset.
- Security: redact secrets, use scoped API keys, and isolate inference from production networks.
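To make the "track everything" item concrete, here is a minimal sketch of per-asset provenance logging, assuming a Python pipeline step and a sidecar-JSON convention; the function name, the `.gen.json` extension, and the directory layout are illustrative assumptions, not an established tool or standard.

```python
# Minimal sketch: write a provenance sidecar next to each generated asset
# so prompts, seeds, and model versions travel through VCS with the output.
import hashlib
import json
import time
from pathlib import Path

def write_provenance(asset_path: Path, prompt: str, seed: int,
                     model_id: str, reviewer: str | None = None) -> Path:
    """Record how an asset was generated in a .gen.json sidecar file."""
    digest = hashlib.sha256(asset_path.read_bytes()).hexdigest()
    record = {
        "asset": asset_path.name,
        "sha256": digest,              # ties the record to this exact file
        "prompt": prompt,
        "seed": seed,
        "model_id": model_id,          # e.g. model name plus version/revision
        "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "reviewed_by": reviewer,       # filled in at the human review gate
    }
    sidecar = asset_path.parent / (asset_path.name + ".gen.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Usage (hypothetical paths and model name): log right after generation,
# before the asset is checked in.
# write_provenance(Path("props/tv_set_01.png"),
#                  prompt="retro CRT television, studio prop",
#                  seed=42, model_id="internal-imagegen-v3")
```

Keeping the record as plain text means diffs, reviews, and audits happen in the same VCS workflow as the asset itself, which is what makes the later legal and quality checks cheap.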
A 30-60 day rollout
- Weeks 1-2: Define use cases, pick models/tools, draft legal+security guidelines, set acceptance criteria.
- Weeks 3-4: Build a thin integration (DCC plugin or CLI), add prompt/version logging, wire into CI for test assets (see the CI check sketch after this list).
- Weeks 5-8: Run a pilot on one feature team, review metrics weekly, expand SKUs only after pass rates stabilize.
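One way to "wire into CI" in Weeks 3-4, sketched under the same assumptions as the logging example above: a small check that fails the build when a generated asset is missing its provenance sidecar. The `assets/generated` path and `.gen.json` convention are assumptions for illustration.

```python
# Hypothetical CI gate: fail the build if any generated asset lacks its
# .gen.json provenance sidecar.
import sys
from pathlib import Path

GENERATED_DIRS = [Path("assets/generated")]  # assumption: generated assets live here

def missing_sidecars() -> list[Path]:
    """Return every asset file that has no matching provenance sidecar."""
    missing = []
    for root in GENERATED_DIRS:
        for asset in root.rglob("*"):
            if asset.is_file() and not asset.name.endswith(".gen.json"):
                sidecar = asset.parent / (asset.name + ".gen.json")
                if not sidecar.exists():
                    missing.append(asset)
    return missing

if __name__ == "__main__":
    untracked = missing_sidecars()
    for asset in untracked:
        print(f"missing provenance sidecar: {asset}")
    sys.exit(1 if untracked else 0)
```

A check this small is enough for the pilot phase; stricter gates (style match scores, artifact rates) can be layered on once the acceptance criteria from Weeks 1-2 are stable.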
Legal risks to front-load
- Training data provenance and third-party style rights.
- Likeness issues for characters and VO; obtain releases when needed.
- Vendor terms (reuse of inputs/outputs), data residency, and export controls.
- Rating compliance and content filters to avoid sensitive or disallowed material.
What to watch next
Expect more formalized committees and toolchains as teams standardize on image/motion/code models. Keep an eye on platform-holder policies and the full CESA report for updated adoption patterns and process benchmarks.