Vietnam's AI Law Takes Effect With Broad Prohibitions and Fault-Based Liability
Vietnam became the first Southeast Asian country to enforce a comprehensive AI law on March 1, 2026. The Law on Artificial Intelligence establishes a risk-based framework that borrows from the EU's AI Act while imposing stricter liability rules than other regional frameworks.
The law arrived as Vietnam's Communist Party leadership pursues what it calls the "era of national rise," a push toward high-income developed status by 2045, with technology as a central engine. The AI law is one of several digital regulations passed since 2024, following the Personal Data Protection Law and a revised Cybersecurity Law.
Fault-Based Liability Sets Vietnam Apart
Vietnam adopted a fault-based liability model rather than the EU's harm-based approach. Under this system, humans remain accountable for AI decisions in matters of social importance, even when the system operates autonomously.
The distinction matters for companies. A bank could deploy an autonomous AI system for loan decisions, but a human executive remains legally responsible for its performance. The law does not technically prohibit self-driving cars or other automation; it places liability with the humans overseeing the system.
Industry groups flagged concerns about the rushed timeline. The law was drafted in three months, leaving limited time for stakeholder analysis. The Business Software Alliance and Computer and Communications Industry Association both called for extended implementation periods beyond the 12-18 month grace period for existing systems.
Broad Prohibitions, Flexible Enforcement
The law explicitly bans AI use for unlawful purposes, deepfakes intended to deceive, and materials threatening national security or public order. Unlike the EU's detailed prohibition list, Vietnam's prohibitions are intentionally broad, granting authorities wide latitude in interpretation and enforcement.
This creates practical complications. If a user manipulates a chatbot to generate prohibited content, liability could fall on the AI company if the content is not properly labeled as AI-generated, even if the user violated the service's terms.
The Cybersecurity Law, effective in July 2026, reinforces these rules by prohibiting unlabeled AI-generated deepfakes. Last December, Vietnam sentenced a Berlin-based journalist in absentia to 17 years for posting AI-generated deepfakes of government leaders. That case illustrates how the AI and Cybersecurity laws will work together to define and punish prohibited content.
Risk Classification and Compliance Burdens
Companies must self-classify their AI products as high, medium, or low-risk and notify the Ministry of Science and Technology before deploying medium or high-risk systems. High-risk systems face routine audits, risk assessments, human oversight requirements, registration in a national database, and incident reporting obligations.
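The classification and notification duties above can be sketched as a simple decision helper. The three risk tiers and the list of high-risk obligations come from the article; the function names and the structure of the helper are purely illustrative assumptions, since the implementing documents defining what counts as high-risk have not yet been released.

```python
from enum import Enum


class RiskTier(Enum):
    """The law's three self-classified risk tiers."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


def must_notify_ministry(tier: RiskTier) -> bool:
    # Medium- and high-risk systems must be notified to the Ministry of
    # Science and Technology before deployment; low-risk systems need no
    # prior notification.
    return tier in (RiskTier.MEDIUM, RiskTier.HIGH)


def high_risk_obligations(tier: RiskTier) -> list[str]:
    # Only high-risk systems carry the additional ongoing duties listed
    # in the law; other tiers have none of these specific obligations.
    if tier is not RiskTier.HIGH:
        return []
    return [
        "routine audits",
        "risk assessments",
        "human oversight",
        "national database registration",
        "incident reporting",
    ]
```

Under this sketch, a deployer would first self-classify, then check `must_notify_ministry` before launch; the actual criteria for each tier remain to be defined by the pending implementing documents.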
Foreign providers of high-risk systems must establish a local contact point in Vietnam. These requirements weigh especially on startups, which face administrative burdens that larger companies can absorb more easily.
The government has not yet released final versions of the implementing documents defining what qualifies as high-risk. The Prime Minister will update the criteria annually, an approach intended to keep pace with AI development, though officials acknowledge the difficulty of staying current.
Support for Domestic AI Industry
The law includes provisions to support Vietnamese startups and small-to-medium enterprises. These include plans for national AI infrastructure, a national AI database, human resource development programs, and financial incentives through an AI Development Fund.
For legal professionals implementing these rules, the law creates immediate compliance obligations. AI companies must label all AI-generated images, video, and audio. Deployers of AI systems bear responsibility for prohibited uses unless the content is properly labeled.
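As a minimal sketch of the labeling duty above, a deployer might attach an explicit AI-generated marker to a media item's metadata before publication. The field names and disclosure text here are hypothetical, since the law's official labeling format has not been specified in final implementing documents.

```python
def label_ai_generated(metadata: dict) -> dict:
    """Return a copy of a media item's metadata with an AI-generated label.

    The "ai_generated" and "disclosure" keys are illustrative assumptions,
    not the statutory labeling format.
    """
    labeled = dict(metadata)  # copy, so the original record is untouched
    labeled["ai_generated"] = True
    labeled["disclosure"] = "This content was generated by AI."
    return labeled


# Hypothetical usage: tag a generated image before it is published.
item = {"type": "image", "source": "text-to-image model"}
published = label_ai_generated(item)
```

The point of the sketch is the workflow, not the format: whatever label scheme the implementing documents ultimately require, labeling would need to happen before content reaches users, since unlabeled output is what shifts liability onto the deployer.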
The uncertainty lies in implementation details. Vietnam's legislative approach sets general principles while relying on underlying directives for specifics. Public comment on draft implementing documents has closed, but final approved versions remain unreleased.
For companies operating across multiple jurisdictions, Vietnam's approach reflects a global trend. Regulatory frameworks for AI are spreading, and compliance with different national rules is becoming standard business practice.