Over Half of Japan's Game Studios Now Use AI, Including Capcom and Level-5
AI use has passed 50% in Japanese game studios, spanning art, writing, code, and even engine work. Leaders should formalize workflows, guard data, and upskill teams to keep pace.

AI hits majority adoption in Japanese game development: what it means for engineering teams
A new survey from the Computer Entertainment Supplier's Association (CESA) shows AI use has crossed a key threshold in Japan. Over half of member studios report using AI in development, with applications spanning art, writing, code, and even engine work.
For IT and Development leaders, this isn't hype. It's a signal to formalize AI workflows, governance, and skills, or risk lagging behind teams that are already compounding speed and experimentation.
Key data points
- 51% of surveyed Japanese game companies are using AI in some capacity (survey conducted during June and July).
- Use cases: visual asset generation, story and text generation, and programming assistance.
- 32% report using AI to help develop in-house game engines.
- Studios openly using AI include Level-5 and Capcom. Nintendo is abstaining for now, citing copyright concerns.
- Larian's Swen Vincke frames AI as a speed tool, not a replacement for human creativity, and recently expanded the concept art team instead of offloading to AI.
Where teams are getting value
- Art and content: concept iterations, upscaling, reference sets, and background elements to reduce manual grind.
- Engineering: code suggestions, boilerplate generation, and refactoring aids.
- Tools and engines: prototyping features, asset pipelines, and support tooling around proprietary tech.
Why some studios are cautious
- Copyright and data provenance risks around training data and outputs.
- Creative quality: risk of sameness, weak art direction, and loss of studio identity if AI is overused.
- Production control: unclear review standards, versioning of AI assets, and difficulty tracing source material.
What this means for IT and Development
The adoption rate implies new baselines for speed and iteration. If your pipeline doesn't account for AI, you'll feel it in delivery dates, content volume, and tool velocity.
- Set policy now: define allowed tools, data use, licensing constraints, and review gates for AI-derived assets.
- Instrument for quality: add checks for art direction, code correctness, and narrative coherence; require human sign-off.
- Secure your inputs: isolate proprietary data, scrub prompts, and log generations for auditability.
- Focus on augmentation: use AI for exploration, boilerplate, and cleanup; protect signature style and core design decisions.
- Upskill the team: prompt standards, tool fluency, and guardrail know-how are the new fundamentals.
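The "secure your inputs" and auditability points above can be sketched as a thin wrapper placed between developers and whatever generation API a studio uses. The scrub patterns, field names, and log format below are illustrative assumptions, not a standard; a real deployment would use vetted scrubbing rules tuned to the studio's own data.

```python
import hashlib
import json
import re
import time

# Illustrative patterns for sensitive substrings; real deployments need
# patterns tuned to the studio's secrets, asset names, and PII.
SCRUB_PATTERNS = [
    (re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"), "<email>"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "<api-key>"),
]

def scrub(prompt: str) -> str:
    """Replace sensitive substrings before the prompt leaves the studio."""
    for pattern, placeholder in SCRUB_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def log_generation(log_path: str, user: str, tool: str,
                   prompt: str, output: str) -> None:
    """Append one audit record per generation (JSON Lines)."""
    record = {
        "ts": time.time(),
        "user": user,
        "tool": tool,
        "prompt": scrub(prompt),
        # Hash the output so assets can be traced later without
        # storing a second copy in the log.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A log like this gives review gates something concrete to check: which tool produced an asset, who prompted it, and a hash that ties the record to the shipped file.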
Practical next steps
- Run a 4-6 week pilot in one pipeline slice (e.g., concept art iteration or gameplay scripting helpers). Track cycle time, defect rate, and review effort.
- Create a shared prompt and review playbook. Include do/don't lists, asset tagging, and handoff rules.
- Stand up a model policy: what's approved (SaaS vs. local), data retention rules, and licensing checks for outputs used in shipped content.
- Add CI checks for AI-assisted code (tests, static analysis, and ownership labels).
- Establish an "AI librarian" role to maintain prompts, templates, and examples that actually work in your stack.
Resources
- Computer Entertainment Supplier's Association (CESA)
Bottom line: treat AI like any other production tool. Measure it, gate it, and use it to remove drudge work. Keep human taste, craft, and IP safety front and center.