Japan commits ¥1 trillion ($6.34B) to build a domestic AI foundation model
Japan is putting serious money behind homegrown AI. The government plans to fund a new public-private company with around ¥1 trillion over five years to build the country's largest foundation model and the infrastructure to support it.
About 10 firms will found the company, including SoftBank Group. Roughly 100 engineers from SoftBank and Preferred Networks are expected to staff the effort in its early phase. The support program is slated to start in fiscal year 2026, which begins next April.
Why Japan is doing this
- Compete with the U.S. and China on core AI capability tied to national security.
- Secure compute: government backing may help with semiconductor procurement during global shortages.
- Build domestic capacity: fund data centers, reduce the digital divide, and keep critical AI talent and IP onshore.
- Push "physical AI": tighter integration of models with robotics, an area where Japan wants a stronger position.
What this means for IT and development teams
Expect a Japan-first stack to emerge: foundation models tuned for the Japanese language, local regulations, and domestic industry data. That can mean lower latency via regional data centers and clearer compliance for sensitive workloads.
Vendors should anticipate procurement cycles that favor domestic AI, chips, and cloud capacity. Engineering orgs should plan for integration with new APIs and toolchains as this ecosystem comes online.
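One low-cost way to prepare is to keep the model endpoint, region, and credentials in configuration rather than hard-coded, so a domestic provider can be swapped in later without a rewrite. The sketch below assumes a hypothetical Japan-hosted service exposing an OpenAI-style chat-completions HTTP API; the base URL, route, and response fields are placeholders, not a real product.

```python
# Minimal sketch: provider-agnostic client so a future Japan-hosted model can be
# swapped in via configuration. The endpoint, route, and payload schema below are
# hypothetical placeholders (OpenAI-style), not a real API.
import os
import requests


def chat(prompt: str, *, timeout: float = 30.0) -> str:
    base_url = os.environ.get("MODEL_BASE_URL", "https://api.example.jp/v1")  # hypothetical
    api_key = os.environ.get("MODEL_API_KEY", "")
    resp = requests.post(
        f"{base_url}/chat/completions",  # assumed OpenAI-compatible route
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": os.environ.get("MODEL_NAME", "placeholder-ja-model"),
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=timeout,
    )
    resp.raise_for_status()
    # Assumes an OpenAI-style response shape; adjust once a real spec exists.
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Summarize this contract clause in plain Japanese."))
```

With the base URL and model name externalized, moving to a Japan-based endpoint later becomes a deployment change rather than a code change.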
Timeline and structure
- Five-year government support starting FY2026.
- Newco formed by ~10 firms; SoftBank among the anchors.
- Initial technical core: ~100 engineers from SoftBank and Preferred Networks.
- Mandate: develop the country's largest foundation model and the compute to run it.
Practical takeaways for builders
- Data readiness: curate Japanese-language and industry-specific datasets with clear governance for future fine-tuning (see the dataset-record sketch after this list).
- Infrastructure: prepare for model hosting on Japan-based clouds or colo; revisit latency, data residency, and energy budgets.
- Security: map AI supply-chain risks (chips, frameworks, model weights) and align with likely government guidance.
- Robotics: evaluate pipelines that combine perception, control, and LLM planning if you build for warehouses, logistics, or manufacturing.
- Hiring and skills: upskill teams on foundation-model ops, evaluation, and safety benchmarks to shorten adoption time when the stack ships (see the evaluation sketch after this list).
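On the data-readiness point, one concrete step is to record provenance, licensing, and residency alongside each dataset now, so future fine-tuning runs can be filtered by policy rather than by memory. The record format below is a minimal sketch; the field names and region labels are illustrative, not an official schema.

```python
# Minimal sketch of a dataset-governance record for future fine-tuning.
# Field names and region labels are illustrative, not an official schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetRecord:
    name: str
    language: str              # e.g. "ja"
    domain: str                # e.g. "manufacturing", "finance"
    license: str               # licensing / usage terms
    contains_pii: bool         # drives masking and residency requirements
    storage_region: str        # e.g. "jp-east", for data-residency audits
    sources: list[str] = field(default_factory=list)


catalog = [
    DatasetRecord(
        name="support-tickets-ja-2024",
        language="ja",
        domain="customer-support",
        license="internal-use-only",
        contains_pii=True,
        storage_region="jp-east",
        sources=["crm-export"],
    ),
]

# Example governance check: only PII-free, Japan-resident data is fine-tuning eligible.
eligible = [r for r in catalog if not r.contains_pii and r.storage_region.startswith("jp")]
print(json.dumps([asdict(r) for r in eligible], ensure_ascii=False, indent=2))
```

Even a flat catalog like this makes data-residency audits and PII reviews much faster once fine-tuning actually begins.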
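On evaluation and safety benchmarks, a small Japanese-language test set and scoring loop can be built today and pointed at any model the day it becomes available. The harness below is a minimal sketch: `call_model` stands in for whatever client you adopt, and exact-match accuracy is deliberately the simplest possible metric.

```python
# Minimal evaluation-harness sketch: run a fixed Japanese test set against any
# model client and report exact-match accuracy. `call_model` is a placeholder.
from typing import Callable

# Tiny illustrative test set; a real one would be far larger and domain-specific.
TEST_CASES = [
    {"prompt": "日本の首都はどこですか？一語で答えてください。", "expected": "東京"},
    {"prompt": "「ありがとう」を英語に訳してください。", "expected": "Thank you"},
]


def evaluate(call_model: Callable[[str], str]) -> float:
    """Return exact-match accuracy of call_model over TEST_CASES."""
    correct = 0
    for case in TEST_CASES:
        answer = call_model(case["prompt"]).strip()
        if answer == case["expected"]:
            correct += 1
    return correct / len(TEST_CASES)


if __name__ == "__main__":
    # Stub model for demonstration; swap in a real client when one is available.
    accuracy = evaluate(lambda prompt: "東京")
    print(f"exact-match accuracy: {accuracy:.0%}")
```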
What to watch next
- Cabinet approval of the basic AI plan and funding release schedule.
- Semiconductor supply deals and domestic data center incentives.
- Early model previews, API access, and participation programs for enterprises and universities.
Level up your team
If you're planning pilots or want to pressure-test your AI roadmap before Japan's models land, explore hands-on programs and certifications focused on foundation models and MLOps. A good starting point is the latest curated training here: Complete AI Training - Latest AI Courses.