Four Asian jurisdictions enacted AI laws in 13 months. They couldn't be more different.
Japan, South Korea, Vietnam, and Taiwan each passed major artificial intelligence legislation between January 2025 and March 2026. The four laws share a common concern: AI governance left to market forces creates accountability gaps. Their solutions diverge sharply.
For legal professionals managing compliance across Asia, the divergence matters immediately. A single AI system deployed across all four markets faces binding conformity assessments in Vietnam, mandatory transparency obligations in South Korea, voluntary governance standards in Japan, and aspirational principles in Taiwan.
Japan: promotion over constraint
Japan's AI Promotion Act, which entered into force on June 4, 2025, treats AI as "a foundational driver of Japan's economic and social development." The law creates an AI Strategy Headquarters inside the Cabinet, chaired by the Prime Minister.
The Cabinet-level placement signals that AI strategy is a national executive priority, not a sectoral concern delegated to a single ministry. The Headquarters must draft a Basic Plan on AI and publish it without delay.
The substantive obligations are light. The State must promote research and development, build shared datasets, and ensure guidelines align with international norms. AI-utilizing businesses must cooperate with government measures. The public is asked to deepen understanding.
What the law does not contain is striking. No mandatory pre-market approvals. No risk classification tiers. No administrative fines. No enforcement powers for regulators to access systems or data. The law acknowledges risks (criminal use, data leakage, copyright infringement) but addresses them through transparency in research rather than prohibition or penalty.
Japan committed to active participation in the G7 Hiroshima AI Process, a framework now spanning 66 countries and 38 organizations. The Hiroshima Process Code of Conduct recommends a risk-based approach and identifies 11 action areas. Japan's domestic law sits within that spirit without converting it into binding obligation.
South Korea: obligations, penalties, and a presidential committee
South Korea's Framework Act on the Development of Artificial Intelligence, enacted on January 21, 2025 and in force from January 22, 2026, imposes direct obligations on industry. The law establishes a National Artificial Intelligence Committee under the President, composed of up to 45 members including a majority of civilian experts, relevant ministers, and a national security representative.
The Committee deliberates on master plans, research strategy, regulations hindering AI competitiveness, infrastructure expansion, and international norm-setting. When the Committee makes recommendations on statutory or system improvements, government agencies must formulate response plans.
The law defines high-impact artificial intelligence across 11 specific domains: energy supply, drinking water, health and medical services, medical devices, nuclear facility safety, biometric analysis for criminal investigation, decisions affecting individual rights (hiring, loan screening), transportation management, government decision-making, early childhood and secondary education evaluation, and areas designated by Presidential Decree.
Businesses providing high-impact AI or generative AI must notify users before deployment. Generative AI outputs must be labeled as such. Virtual audio, image, or video outputs that are difficult to distinguish from real content require explicit disclosure. Operators of systems meeting a computational threshold must identify, assess, and mitigate risks across the AI lifecycle.
Failure to meet transparency notification obligations carries an administrative fine up to 30 million Korean won. Unauthorized disclosure of Committee deliberations can result in imprisonment up to three years or fines up to 30 million won.
South Korea established an Artificial Intelligence Safety Institute under the Minister of Science and ICT to define AI safety risks and conduct international exchange. A separate Artificial Intelligence Policy Center supports policy formulation and international norm development.
In August 2025, South Korea's Personal Information Protection Commission published AI privacy guidelines for generative AI development, demonstrating that the Framework Act operates alongside existing data protection structures.
Vietnam: the most technically detailed framework
Vietnam's Law on Artificial Intelligence, passed December 10, 2025 and effective March 1, 2026, introduces a three-tier risk classification system with specific procedural requirements for each tier. The law addresses liability, incident management, national infrastructure, and regulatory sandboxes.
High-risk systems are those capable of causing significant damage to life, health, rights, national interests, or national security. Medium-risk systems can cause confusion about whether users are interacting with AI or AI-generated content. Low-risk systems are everything else.
Providers of high-risk systems must undergo conformity assessment before deployment or upon significant changes. Systems on a Prime Minister-designated list require assessment by a registered conformity assessment body. Other high-risk systems may be self-assessed.
High-risk system providers must establish risk management measures, manage training and validation data for quality, compile and store technical documentation and operation logs, design systems to enable human supervision and intervention, and coordinate with authorities on inspection and incident remediation.
The law attaches explicit civil liability. Where a high-risk AI system is managed and operated in accordance with regulations but still causes damage, the deployer bears responsibility for compensating the damaged person. The deployer may seek contribution from the provider, developer, or other parties if an agreement exists. Liability is exempted only where damage occurs entirely due to the damaged person's intentional fault or force majeure.
Vietnam established a National AI Development Fund, an off-budget state financial fund operating on a not-for-profit basis, to mobilize resources for AI research, development, application, and management. The Fund can allocate capital flexibly, independent of the budget year.
Foreign providers of high-risk AI systems deployed in Vietnam must maintain a legal contact point in the country. Where mandatory conformity certification is required, foreign providers must have a commercial presence or authorized representative.
Existing operators have 18 months from March 1, 2026 to comply with the law's requirements in healthcare, education, and finance sectors, and 12 months for other sectors.
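Those transitional windows lend themselves to simple programmatic tracking. A minimal sketch in Python, assuming the sector groupings and month counts stated above (the statute's own sector definitions will control in practice, and the sector strings here are illustrative labels, not legal terms):

```python
from datetime import date

EFFECTIVE = date(2026, 3, 1)  # Vietnam's AI Law takes effect

# Sectors the article assigns the longer 18-month window
LONG_WINDOW_SECTORS = {"healthcare", "education", "finance"}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months.

    Day-of-month is preserved; a March 1 anchor avoids
    end-of-month edge cases entirely.
    """
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

def compliance_deadline(sector: str) -> date:
    """Deadline for existing operators: 18 months for healthcare,
    education, and finance; 12 months for all other sectors."""
    months = 18 if sector.lower() in LONG_WINDOW_SECTORS else 12
    return add_months(EFFECTIVE, months)

print(compliance_deadline("healthcare"))  # 2027-09-01
print(compliance_deadline("retail"))      # 2027-03-01
```

On these assumptions, an existing healthcare deployment must comply by September 1, 2027, while a retail deployment faces a March 1, 2027 deadline.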
Taiwan: principles first, sectoral implementation second
Taiwan's Artificial Intelligence Basic Act, promulgated January 8, 2025, is the shortest of the four texts and the most explicitly principle-driven. Its 20 articles establish governance architecture without imposing direct compliance obligations on private actors. Sectoral legislation must be enacted within two years of implementation.
The basic principles enumerated cover seven areas: sustainable development and well-being, human autonomy, privacy protection and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability.
The explanatory notes are unusually specific about international sources. The transparency and explainability principle draws on the EU's Ethics Guidelines for Trustworthy AI; the safety principle draws on Singapore's Model AI Governance Framework for Generative AI; and the sustainability principle explicitly references the G7 Hiroshima Process International Code of Conduct. The framework is openly intertextual, treating international consensus documents as normative inputs.
The institutional centerpiece is a National AI Strategic Committee convened by the Premier, composed of scholars, experts, AI industry representatives, ministers without portfolio, relevant agency heads, and special municipality leaders. The Committee must meet at least once a year.
The government, not private operators, bears direct obligations. It must prevent AI applications from infringing on people's life, body, freedom, or property; from undermining social order or the ecological environment; and from producing bias, discrimination, false advertising, or misleading information. Where sectoral regulators designate AI products as high-risk in consultation with the Ministry of Digital Affairs, those products must be clearly labeled with warnings.
The government must review all laws and administrative measures within two years of implementation and enact, amend, or repeal any that are inconsistent with the Act. That review obligation creates a structured legislative pipeline for Taiwan's regulatory agencies.
How the four frameworks compare
Scope of private obligation. Vietnam and South Korea impose direct obligations on AI developers, providers, and deployers. Japan and Taiwan address requirements primarily to government actors, leaving industry obligations to be developed through subsidiary instruments.
Risk classification. Vietnam and South Korea define high-risk AI through specific domain lists and attach compliance requirements to that designation. Japan and Taiwan establish risk as a concept requiring further elaboration, without immediately triggering private sector obligations.
Enforcement. South Korea specifies administrative fines and criminal penalties. Vietnam establishes civil liability for high-risk AI deployments that cause harm. Japan and Taiwan include no direct penalty provisions for private actors in the current texts.
Generative AI. South Korea addresses generative AI explicitly, requiring labeling of outputs and user notification. Vietnam's medium-risk tier captures systems that cause confusion about whether content is AI-generated. Japan and Taiwan do not address generative AI specifically in their framework legislation.
International alignment. All four acknowledge or draw on international frameworks, including OECD AI Principles and G7 Hiroshima Process materials. The Hiroshima AI Process includes a voluntary reporting framework hosted by the OECD in which 25 companies have published reports. The 2026 Action Plan agreed at the second in-person Friends Group meeting in Tokyo includes outreach efforts, knowledge-sharing seminars, and interoperability studies.
What this means for compliance teams
A marketing platform deploying an AI-powered bidding or content generation system across all four markets would face binding conformity assessment obligations in Vietnam, transparency and labeling obligations in South Korea, voluntary governance standards in Japan, and aspirational principles with pending sectoral implementation in Taiwan.
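One way a compliance team might encode that divergence is as a jurisdiction-to-obligation matrix that feeds a deployment checklist. A hedged Python sketch, with obligation labels paraphrased from the article's summary of the four laws (the names and categories are illustrative, not statutory terms):

```python
# Illustrative obligation matrix for one AI system deployed across all
# four markets; entries paraphrase the article, not statutory text.
OBLIGATIONS = {
    "Vietnam": {
        "binding": ["conformity assessment before deployment (high-risk)",
                    "technical documentation and operation logs",
                    "in-country legal contact point for foreign providers"],
        "voluntary": [],
    },
    "South Korea": {
        "binding": ["user notification for high-impact and generative AI",
                    "labeling of generative AI outputs"],
        "voluntary": [],
    },
    "Japan": {
        "binding": [],
        "voluntary": ["cooperation with government measures",
                      "Hiroshima Process (HAIP) voluntary reporting"],
    },
    "Taiwan": {
        "binding": [],
        "voluntary": ["basic principles pending sectoral legislation"],
    },
}

def binding_jurisdictions(matrix: dict) -> list[str]:
    """Jurisdictions where at least one binding obligation applies today."""
    return sorted(j for j, obs in matrix.items() if obs["binding"])

print(binding_jurisdictions(OBLIGATIONS))  # ['South Korea', 'Vietnam']
```

The sketch makes the article's central point queryable: as of early 2026, only Vietnam and South Korea impose binding private-sector obligations, while Japan and Taiwan operate through voluntary and pending mechanisms.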
South Korea's definition of high-impact AI explicitly includes "judgments or evaluations that have a significant impact on the rights and obligations of individuals, such as hiring and loan screening." AI systems that make consequential decisions about which individuals see which offers could reasonably fall within that framing.
Vietnam requires conformity assessment before deployment for systems on a Prime Minister-designated high-risk list. By March 2026, that list had not been published, leaving operators in a transitional window.
Japan's lightweight framework may appear commercially attractive, but the Hiroshima Process context matters. The HAIP Reporting Framework, which lists participants publicly on the OECD website, is functioning as a soft accountability mechanism. Companies that submit reports face scrutiny from civil society and benchmarking by competitors. The 2026 Action Plan specifically calls for outreach to increase reporting diversity and volume. That is a market dynamic brand-sensitive operators will need to monitor.
Taiwan's explicit cross-referencing of international frameworks positions the jurisdiction as a rule-taker aligned with global norms rather than an independent regulatory actor. That may ease market access concerns for international AI developers seeking predictable operating environments.
Timeline
- January 8, 2025: Taiwan promulgates Artificial Intelligence Basic Act
- January 21, 2025: South Korea enacts Framework Act on the Development of Artificial Intelligence
- June 4, 2025: Japan promulgates AI Promotion Act, establishing AI Strategy Headquarters within the Cabinet
- August 2025: South Korea's Personal Information Protection Commission publishes AI privacy guidelines for generative AI data processing
- December 10, 2025: Vietnam's National Assembly passes Law on Artificial Intelligence
- January 22, 2026: South Korea's Framework Act enters into force
- March 1, 2026: Vietnam's AI Law takes effect; 18-month compliance window opens for healthcare, education, and finance AI systems; 12-month window for other sectors
- March 2026: Second in-person meeting of the HAIP Friends Group in Tokyo agrees the 2026 Action Plan
For readers of AI for Legal Professionals, the fragmentation documented across European AI training regulations now has an emerging parallel in Asia. Compliance teams managing AI systems across multiple jurisdictions will need to track not only the binding obligations in each market but also the voluntary frameworks and soft accountability mechanisms that shape competitive positioning.