Japan's AI Act Is Now Live: What Dev and Product Teams Need to Know
Japan's Act on the Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies (the "AI Act") took full effect in September 2025. The goal is straightforward: push AI innovation while managing risk so people can use AI with trust.
If you build, ship, or operate AI features in Japan, or target Japanese users, this law matters. Here's the practical breakdown.
Why the AI Act exists
AI is now in everything from code assistants to photo filters on phones. Alongside the gains, there's public concern about safety, fairness, and accountability.
Japan crafted this law to close the gap with leading AI nations and to meet user expectations: useful AI, fewer hazards. It draws on the G7's Hiroshima AI Process to promote research and deployment with risk in mind.
Japan's approach: promote first, mitigate smartly
Rather than heavy new rules, Japan leans on existing laws (e.g., the Penal Code) and ministry-issued guidelines. The government will publish AI guidelines, investigate malicious incidents, and issue guidance or advice based on findings.
This keeps room for fast iteration while still addressing misuse. Expect updates as new risks appear.
Core structures the law creates
- AI Strategic Headquarters: Chaired by the Prime Minister; all Cabinet ministers are members. It coordinates national AI policy.
- AI Basic Plan: A government-wide plan that sets baseline AI policy from research through utilization. Scheduled to be finalized within the year.
- Guidelines: Issued in line with international norms to support transparency, fairness, and safety in R&D and deployment.
- Shared responsibility: Central/local governments, R&D institutes, businesses (including developers and suppliers), and citizens each have defined roles. Businesses are expected to cooperate with government measures.
- Scope: Applies to anyone-including foreign operators-conducting AI research, development, or utilization that targets Japanese businesses or citizens.
How it fits globally
The EU leans toward strong safety and fundamental-rights protection through new legislation (the EU AI Act). The US emphasizes innovation and economic growth while addressing security risks.
Japan's model lands in the middle: avoid excessive constraints, respond quickly with guidance, and keep innovation moving.
Generative AI: common risks to expect
- Misinformation and synthetic content that misleads users
- Privacy leaks or re-identification from training or outputs
- IP/copyright issues (training data and generated content)
- Bias and discriminatory outcomes in models or datasets
- Security issues such as prompt injection and data exfiltration
- Overreliance and automation bias in high-stakes workflows
For background on Japan's international coordination, see the Hiroshima AI Process and the OECD AI resources.
What your team should do now
1) Set up AI governance that matches your risk
- Appoint an owner for AI risk and compliance across product lines.
- Maintain a registry of models, data sources, providers, and use cases (a minimal sketch follows this list).
- Classify use cases by impact (e.g., content assist vs. medical triage) and apply stronger controls where needed.
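A registry doesn't need heavy tooling to start. Here's a minimal sketch in Python; the field names and risk tiers are illustrative assumptions, not categories defined by the Act:

```python
from dataclasses import dataclass, field

# Illustrative risk tiers; the AI Act does not prescribe these.
RISK_TIERS = ("low", "medium", "high")

@dataclass
class AIUseCase:
    """One row in a lightweight AI use-case registry (assumed schema)."""
    name: str                 # e.g., "support-ticket-summarizer"
    owner: str                # accountable person or team
    model: str                # pinned model/version string
    provider: str             # vendor or internal team
    data_sources: list[str] = field(default_factory=list)
    risk_tier: str = "low"    # drives how strict the controls are

    def __post_init__(self) -> None:
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Example entry: a low-impact content-assist feature.
registry = [
    AIUseCase(
        name="blog-draft-assistant",
        owner="growth-team",
        model="my-model-2025-09-01",
        provider="ExampleVendor",
        data_sources=["public web content"],
        risk_tier="low",
    ),
]
```

Even a list like this makes the later steps (provenance, logging, transparency) much easier to scope, because every control can reference a registry entry.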
2) Treat data provenance as a first-class requirement
- Record training/fine-tuning sources, licenses, and consent status (see the sample record after this list).
- Strip or protect personal data; apply minimization and retention controls.
- Document dataset curation and approvals for audits.
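To make provenance auditable, capture it at ingestion time rather than reconstructing it later. A minimal sketch with assumed field names; license strings and consent bases will vary by contract and jurisdiction:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DatasetRecord:
    """Provenance entry for one training/fine-tuning source (assumed schema)."""
    dataset_id: str
    source_url: str
    license: str               # e.g., "CC-BY-4.0", "proprietary-contract-123"
    contains_personal_data: bool
    consent_basis: str         # e.g., "user opt-in", "n/a"
    approved_by: str           # who signed off on using this data
    approved_on: date
    retention_until: date      # when the data must be deleted or re-reviewed

record = DatasetRecord(
    dataset_id="support-chats-2024Q4",
    source_url="internal://warehouse/support_chats",
    license="proprietary-internal",
    contains_personal_data=True,
    consent_basis="terms-of-service opt-in",
    approved_by="data-governance@example.com",
    approved_on=date(2025, 1, 15),
    retention_until=date(2027, 1, 15),
)
```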
3) Build safety into the model lifecycle
- Red-team prompts and jailbreaks; test for bias and harmful outputs.
- Apply content filters, guardrails, and rate limits; add human review for high-impact decisions.
- Log inputs/outputs and decisions with traceability to model versions; a logging sketch follows this list.
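Traceability is mostly a logging discipline: every generation should be attributable to a specific model version, prompt, and output. A minimal sketch, where `call_model` is a placeholder for whatever client you actually use:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

MODEL_VERSION = "my-model-2025-09-01"  # assumed identifier; pin the exact version

def call_model(prompt: str) -> str:
    """Placeholder for your actual model client."""
    return "(model output)"

def generate_with_audit(prompt: str, user_id: str) -> str:
    """Call the model and emit a structured, replayable audit record."""
    request_id = str(uuid.uuid4())
    output = call_model(prompt)
    log.info(json.dumps({
        "request_id": request_id,
        "timestamp": time.time(),
        "user_id": user_id,          # pseudonymize if logs leave your boundary
        "model_version": MODEL_VERSION,
        "prompt": prompt,
        "output": output,
    }))
    return output
```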
4) Ship transparency users can trust
- Label AI features; disclose data use and limitations in plain language.
- Mark synthetic media where relevant; provide user controls and feedback channels.
- Publish model or system cards for key systems (a starter sketch follows).
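A system card can start as structured data that your docs site renders. A minimal sketch; these fields reflect common practice, not a format the Act prescribes:

```python
# Illustrative system card as plain data (assumed fields, not a mandated format).
system_card = {
    "system": "blog-draft-assistant",
    "model_version": "my-model-2025-09-01",  # assumed identifier
    "purpose": "Drafts blog copy from an outline; a human edits before publishing.",
    "data_use": "Prompts are retained 30 days for abuse monitoring, then deleted.",
    "limitations": [
        "May produce plausible but incorrect statements.",
        "Not evaluated for legal, medical, or financial advice.",
    ],
    "synthetic_media": "All generated images are watermarked and labeled.",
    "feedback": "ai-feedback@example.com",
}
```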
5) Lock down security for LLM-era threats
- Sanitize inputs; filter untrusted content; isolate tools and connectors.
- Protect secrets in prompts and function calls; enforce least privilege.
- Scan outputs before actions (code exec, API calls, file writes); see the sketch after this list.
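Before letting a model-proposed action execute, validate it against an allowlist and reject anything suspicious. A minimal sketch, assuming the model returns a JSON tool call; the tool names and checks are illustrative:

```python
import json

# Only tools you have deliberately exposed; deny everything else.
ALLOWED_TOOLS = {"search_docs", "get_weather"}

def execute_action(raw_model_output: str) -> str:
    """Validate a model-proposed tool call before running it (sketch)."""
    try:
        action = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return "rejected: output is not valid JSON"

    tool = action.get("tool")
    if tool not in ALLOWED_TOOLS:
        return f"rejected: tool {tool!r} is not on the allowlist"

    args = action.get("args", {})
    # Example check: never pass credentials through model output.
    if any("password" in str(v).lower() for v in args.values()):
        return "rejected: argument looks like a credential"

    return f"ok: would run {tool} with {args}"  # dispatch to the real tool here

print(execute_action('{"tool": "search_docs", "args": {"query": "AI Act"}}'))
print(execute_action('{"tool": "delete_files", "args": {"path": "/"}}'))
```

The key design choice is default-deny: the model can only request actions you have explicitly exposed, and even those are checked before anything runs.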
6) Update contracts and vendor reviews
- Add clauses for logging, data residency, deletion, and incident reporting.
- Require suppliers to meet safety guidelines and provide evaluation evidence.
- Define model update policies and break-glass procedures.
7) Prepare for guidance and reporting
- Map your controls to international codes of conduct referenced by Japan.
- Keep a lightweight evidence pack: policies, test results, incidents, fixes.
- Assign owners for responding to government guidance or advice.
8) Upskill your team
- Train engineers and product managers on prompt security, data handling, evaluation, and safe deployment patterns.
- If you need structured learning paths, explore role-based options at Complete AI Training.
Hiroshima AI Process and reporting
Japan also supports transparency through the Hiroshima AI Process. A voluntary Reporting Framework tracks company commitments to an international code of conduct for advanced AI systems.
Expect more companies to opt in as the benefits become clear: clearer expectations, easier trust-building with customers, and fewer surprises during audits or government reviews.
What's next
The AI Basic Plan will set the baseline for how ministries coordinate AI policy and guidance across sectors. As more guidelines arrive, keep your governance simple, documented, and testable.
The takeaway: build fast, ship responsibly, and be ready to show your work. That's the spirit of the AI Act, and it's the path to durable products in Japan's market.