Generative AI Could Cut AAA Dev Cycles To 2 Years, Says Level-5 CEO Akihiro Hino
December 27, 2025
Level-5 CEO Akihiro Hino says generative AI can shrink AAA development cycles to roughly two years. He also suggests many studios already use AI for efficiency but aren't public about it. His stance is simple: treating AI as "evil" slows progress.
Story highlights
- Hino believes AI can compress AAA timelines from 5-10 years to about 2 years.
- Studios are using AI behind the scenes for faster iteration and lower costs.
- Demonizing the tech will delay better tools, workflows, and outcomes.
What Hino is arguing
In recent posts, Hino said AI can cut AAA development to two years and that many teams already apply it quietly for throughput gains. He argues the tech should be viewed as leverage for progress, not a threat.
This matches a broader industry shift. Giants like Sony are investing in AI research to improve tools and workflows, from content generation to testing and simulation. See Sony's public AI work for context: Sony AI.
Why this matters for engineering and production
- Shorter cycles reduce risk exposure and budget overruns.
- More frequent releases mean tighter feedback loops and fewer dead ends.
- Tooling-first approaches can move teams from heroics to repeatable systems.
- Automation frees experts to focus on core design, combat feel, and polish.
How a two-year AAA could work (practical outline)
- 0-3 months (Foundations): Define scope, greenlight tech stack, centralize datasets, evaluate off-the-shelf models vs. fine-tunes, set review gates.
- 3-9 months (Vertical slice): AI-assisted prototyping for core loop, blockouts, placeholder VO/SFX, rapid balancing sims, CI with AI-generated tests.
- 9-18 months (Production): Scaled asset generation with human art direction, automated localization drafts, LLM copilots for tools and gameplay code, physics/tuning sims.
- 18-24 months (Polish): AI triage for bug deduplication, auto-regression tests, model-driven performance profiling suggestions, content QA with human sign-off.
Where AI fits in the toolchain
- Content: Concept exploration, material variants, LOD suggestions, animation clean-up, placeholder VO/NPC chatter.
- Code: Copilots for engine tooling and gameplay scripts, refactors, unit test generation, migration support.
- Design/UX: Encounter prototyping, narrative beat drafts, pacing analysis, telemetry-informed tuning.
- QA/Build: Test case generation, crash log clustering, flaky test detection, smart build notifications.
- Localization/Support: First-pass translations, string consistency checks, glossary enforcement.
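To make one of these concrete: the crash log clustering mentioned under QA/Build can start as something very simple before any model is involved. The sketch below groups near-identical crash logs by string similarity so triage sees each root cause once; the log messages and threshold are hypothetical, and a real pipeline would normalize addresses and timestamps before comparing.

```python
from difflib import SequenceMatcher


def dedupe_crashes(logs, threshold=0.9):
    """Group near-identical crash logs into clusters.

    Each cluster is (representative_log, member_logs). A production
    system would normalize frame numbers, addresses, and timestamps
    first so noise doesn't split clusters.
    """
    clusters = []
    for log in logs:
        for cluster in clusters:
            # Compare against the cluster's representative log.
            if SequenceMatcher(None, cluster[0], log).ratio() >= threshold:
                cluster[1].append(log)
                break
        else:
            # No existing cluster is similar enough; start a new one.
            clusters.append((log, [log]))
    return clusters


# Hypothetical crash messages for illustration.
logs = [
    "NullRef in EnemyAI.Update at frame 120",
    "NullRef in EnemyAI.Update at frame 121",
    "OutOfMemory in TextureStreamer.Load",
]
clusters = dedupe_crashes(logs)  # two clusters: NullRef x2, OutOfMemory x1
```

Even this baseline cuts duplicate bug reports; an LLM pass can then label each cluster with a suspected subsystem, with a human confirming before tickets are filed.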
Risks and guardrails
- IP and licensing: Use models with clear training data lineage. Contract for indemnity where possible.
- Workforce and credits: Keep humans in the loop. Document authorship, approvals, and sign-offs.
- Security and privacy: No sensitive data in prompts. Prefer on-prem or VPC endpoints. Log and audit.
- Quality and reliability: Establish eval sets, golden prompts, and acceptance thresholds per discipline.
- Voice/likeness concerns: Written consent for synthetic voices. Watermark placeholders. Replace before ship if needed.
- Versioning and reproducibility: Pin model versions, prompts, and seeds. Treat them like code.
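The last guardrail, treating model versions, prompts, and seeds like code, can be enforced with a small run manifest recorded alongside every generation. This is a minimal sketch; the model ID and prompt are hypothetical placeholders, and the manifest would normally be committed or logged with the generated asset.

```python
import hashlib


def run_manifest(model_id, prompt, seed, params):
    """Capture everything needed to reproduce a generation run.

    Hashing the prompt keeps the manifest small while still letting
    you detect when the prompt text has changed.
    """
    return {
        "model_id": model_id,  # exact pinned version, never "latest"
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "seed": seed,          # fixed seed for deterministic re-runs where supported
        "params": params,      # sampling settings that affect output
    }


# Hypothetical run for a placeholder NPC-chatter generation task.
manifest = run_manifest(
    model_id="example-model-2025-06-01",
    prompt="Generate ambient NPC chatter for a harbor district.",
    seed=42,
    params={"temperature": 0.2, "max_tokens": 256},
)
```

Storing this next to the output means a regression six months later can be traced to exactly which model, prompt, and settings produced the asset.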
Fan backlash is real, so manage communication
Level-5 has already taken heat for using tools like ChatGPT for character design in series such as Inazuma Eleven and Professor Layton. The sentiment is clear: players want assurance that AI isn't replacing craft.
For studios, the path forward is transparency without hype. Share where AI assists (prototyping, testing, localization drafts), define human review gates, and commit to artistic standards. Report measurable wins: fewer regressions, faster iteration, better framerate stability.
What to do next if you run an engineering org
- Stand up a small AI platform team to centralize models, data governance, and evals.
- Pick two high-impact pilots (e.g., QA triage, localization drafts). Ship value in 6-8 weeks.
- Create discipline-specific playbooks: art, code, design, audio, QA.
- Integrate model telemetry, cost tracking, and SLA alerts into your existing observability stack.
- Train teams on prompt patterns, safety, and review protocols. Keep it pragmatic.
Bottom line
Hino's two-year target is aggressive but not fantasy if teams systematize AI where it compounds: prototyping, testing, and repetitive production tasks. The win isn't replacing people; it's removing drudgery so experts can ship better games, faster.
If you're planning upskilling for leads and ICs, this catalog is a quick way to find relevant courses by role: Complete AI Training - Courses by job.