Vietnam's First AI Law: What Legal Teams Need to Know by March 1, 2026
Vietnam has approved a comprehensive AI Law that sets clear ground rules for how AI is built, deployed, and overseen in the country. The framework is risk-based, innovation-friendly, and extraterritorial: it applies to any AI system that affects people, markets, or national interests in Vietnam, regardless of where the provider sits.
For in-house counsel and compliance leaders, this is a near-term planning item. The law takes effect on March 1, 2026, with a 12-month transition for systems already operating in Vietnam.
Scope, governance, and timing
- Extraterritorial scope: Obligations apply to local and foreign organizations whose AI systems affect Vietnam.
- Central authority: The Ministry of Science and Technology (MoST) leads implementation and oversight.
- Transition period: Existing AI systems get 12 months from March 1, 2026 to comply, provided no substantial risk is found by authorities.
- Legal precedence: Where conflicts arise, this Law prevails over other laws on the same issue. If another instrument provides more favorable incentives, eligible entities may choose those.
Guiding principles (seven pillars)
- Human-centric: Human supervision and accountability remain mandatory, especially for consequential decisions.
- Safety, fairness, transparency, accountability: Prevent discrimination, ensure traceability, and define responsibilities for harm.
- National autonomy with openness: Strengthen domestic tech and data capabilities while engaging with global norms.
- Inclusive and sustainable: Support socio-economic goals, protect the environment, and respect cultural identity.
- Balanced policy: Manage trade-offs between innovation, oversight, and social interests.
- Risk-based oversight: Regulatory burden scales with demonstrated risk.
- Promotion of innovation: Enable research, startups, and commercialization through a supportive legal setting.
Risk classification and core obligations
- Unacceptable risk (prohibited): Systems that threaten national security, human dignity, or social order, including technology designed to manipulate behavior, large-scale facial recognition without consent, and destabilizing deepfakes.
- High risk: AI used in finance, healthcare, education, infrastructure, and justice. Requires pre-market government assessment, registration, and ongoing oversight.
- Medium risk: Interactive or generative systems. Must ensure transparency, user awareness, labeling, and feedback channels.
- Low risk: Minimal-impact systems. Self-regulation is permitted, subject to general principles and post-market monitoring.
Providers must run formal risk assessments and maintain technical documentation. High-risk systems face conformity assessments, registration in the National AI Database, human oversight measures, and incident reporting. Foreign providers must appoint a local legal representative in Vietnam.
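To make the tiering operational, many teams keep an internal register that maps each system to a provisional tier and its headline obligations. The Python sketch below is illustrative only: the tier criteria, sector shortlist, and obligation labels are a simplified reading of the summary above rather than the statutory text, and any classification should be confirmed against the Law and MoST guidance.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # pre-market assessment, registration, ongoing oversight
    MEDIUM = "medium"              # transparency, labeling, feedback channels
    LOW = "low"                    # self-regulation plus post-market monitoring

# Headline obligations per tier, paraphrased from the summary above (illustrative, not statutory).
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "pre-market conformity assessment",
        "registration in the National AI Database",
        "human oversight measures",
        "incident reporting",
        "technical documentation and logging",
    ],
    RiskTier.MEDIUM: ["user disclosure", "content labeling", "feedback channel"],
    RiskTier.LOW: ["general principles", "post-market monitoring"],
}

# Sectors flagged as high risk in the summary above (illustrative shortlist, not exhaustive).
HIGH_RISK_SECTORS = {"finance", "healthcare", "education", "infrastructure", "justice"}

@dataclass
class AISystem:
    name: str
    sector: str
    generative_or_interactive: bool = False
    prohibited_use: bool = False  # e.g. manipulative uses or unauthorized mass surveillance

def provisional_tier(system: AISystem) -> RiskTier:
    """First-pass classification for an internal register; not a legal determination."""
    if system.prohibited_use:
        return RiskTier.UNACCEPTABLE
    if system.sector.lower() in HIGH_RISK_SECTORS:
        return RiskTier.HIGH
    if system.generative_or_interactive:
        return RiskTier.MEDIUM
    return RiskTier.LOW

if __name__ == "__main__":
    chatbot = AISystem("support-chatbot", sector="retail", generative_or_interactive=True)
    tier = provisional_tier(chatbot)
    print(tier.value, TIER_OBLIGATIONS[tier])
```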
Application restrictions
- Real-time biometric surveillance in public spaces without special approval.
- Large-scale facial recognition databases built through unauthorized scraping.
- AI designed to deceptively manipulate public opinion or behavior.
The structure will look familiar to global counsel who follow the EU AI Act, but with a stronger emphasis on national sovereignty, data autonomy, and cultural stability.
National AI infrastructure and data governance
The Law establishes a National AI Database as a centralized platform to register and monitor AI systems deployed in Vietnam. High-risk providers must implement risk management programs, training data governance, technical documentation, and operational logging.
Human oversight is not optional. Providers must build it into system design and operation, alongside transparency and incident-handling duties.
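One pragmatic way to evidence the logging and human-oversight duties is to record every consequential model decision along with the reviewer who signed off on it. The minimal sketch below assumes a generic model callable and a local JSON Lines audit file; the field names and retention approach are assumptions, since the implementing guidance will set the actual documentation standard.

```python
import json
import time
from pathlib import Path
from typing import Any, Callable

LOG_PATH = Path("ai_decision_log.jsonl")  # assumed local store; real retention rules come from guidance

def log_decision(system_id: str, inputs: dict[str, Any], output: Any,
                 reviewed_by: str | None = None) -> None:
    """Append one decision record to a JSON Lines audit log."""
    record = {
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewed_by,          # None means no human sign-off yet
        "human_reviewed": reviewed_by is not None,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def predict_with_oversight(system_id: str, model: Callable[[dict], Any],
                           inputs: dict[str, Any], reviewer: str | None = None) -> Any:
    """Run the model, then log the decision and (optionally) who approved it."""
    output = model(inputs)
    log_decision(system_id, inputs, output, reviewed_by=reviewer)
    return output
```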
Transparency and content labeling
Users must be informed when they are interacting with AI, unless an exception is set by law. AI-generated audio, images, and video require machine-readable identification markers that distinguish them from authentic content.
Where public AI-generated content could confuse audiences about authenticity, deployers must add clear notices and distinct labels, especially when simulating real people or real events. Artistic and cinematic works may have flexible labeling, but content must remain clearly distinguishable.
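The Law, as summarized here, requires machine-readable markers without prescribing a format. As a placeholder approach, the sketch below embeds a provenance note in PNG metadata using Pillow and writes a JSON sidecar so the marker survives pipelines that strip image metadata; the field names are assumptions pending MoST's technical guidance.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated_png(src: str, dst: str, provider: str) -> None:
    """Embed an AI-provenance marker in PNG text metadata and write a JSON sidecar."""
    marker = {
        "ai_generated": True,   # placeholder field names, not a mandated schema
        "provider": provider,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    meta = PngInfo()
    meta.add_text("ai-content-marker", json.dumps(marker))
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)
    # Sidecar file as a fallback for channels that discard embedded metadata.
    Path(dst + ".marker.json").write_text(json.dumps(marker, indent=2), encoding="utf-8")

# Example: mark_ai_generated_png("render.png", "render_labeled.png", provider="ExampleCo")
```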
Incident reporting and response
Developers, providers, deployers, and users share responsibility for safety, security, and reliability. When serious incidents occur, developers and providers must apply technical fixes, suspend or withdraw affected systems if needed, and notify authorities.
Deployers and users must record, report, and cooperate in remediation. All reporting flows through a national one-stop electronic portal for AI, with detailed procedures to be issued by the government.
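Until the portal's procedures are published, teams can still standardize what an internal incident record captures so it maps cleanly onto the official forms later. The fields and severity labels below are assumptions for illustration; the national portal's actual schema, endpoints, and deadlines are not specified here.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    """Internal incident record; field names are placeholders until official forms exist."""
    system_id: str
    severity: str                      # e.g. "serious" triggers provider notification duties
    description: str
    detected_at: str
    affected_users: int = 0
    mitigations: list[str] = field(default_factory=list)
    system_suspended: bool = False
    reported_to_authority: bool = False

def to_submission_payload(report: AIIncidentReport) -> str:
    """Serialize the record as JSON for whatever format the national portal ultimately requires."""
    return json.dumps(asdict(report), ensure_ascii=False, indent=2)

if __name__ == "__main__":
    incident = AIIncidentReport(
        system_id="credit-scoring-v2",
        severity="serious",
        description="Model outage caused incorrect rejections",
        detected_at=datetime.now(timezone.utc).isoformat(),
        affected_users=120,
        mitigations=["rolled back to previous model version"],
        system_suspended=True,
    )
    print(to_submission_payload(incident))
```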
Preferential mechanisms and incentives
- AI as capital: Models, algorithms, and data assets can be recognized as capital contributions, unlocking new financing options.
- Top-tier incentives: AI enterprises are eligible for the highest incentives under science, high-tech, digital transformation, and investment laws.
- National AI Development Fund: Grants, loans, and preferential financing for domestic startups, SMEs, and foreign investors building in Vietnam.
- Regulatory sandboxes: Controlled testing environments with relaxed conditions to speed responsible market entry.
Enforcement and penalties
Penalties may be set as a percentage of global revenue, creating strong deterrence for cross-border actors. Expect scrutiny for high-risk deployments, especially in sensitive sectors and critical infrastructure.
Sector impact: where to expect movement
- Healthcare: Diagnostics and patient management can gain priority access to infrastructure and funding, with strict pre-market checks.
- Finance and fintech: Credit scoring, fraud detection, and digital payments benefit from clearer rules that build user trust.
- Manufacturing and logistics: Automation and optimization align with Vietnam's industrial upgrade goals and export focus.
- Public services: E-government and smart city initiatives will leverage AI via public-private partnerships and cluster programs.
Action plan for legal and compliance teams
- Inventory AI systems touching Vietnam and map each to the four risk tiers.
- Run gap assessments against the Law's documentation, data governance, and human oversight requirements (a minimal sketch follows this list).
- Decide on the local legal representative for foreign providers; define authority and escalation paths.
- Prepare for conformity assessments and registration in the National AI Database for high-risk systems.
- Design an incident response playbook tied to the national AI portal; set internal SLAs and notification triggers.
- Implement machine-readable content markers and user disclosures for generative outputs; update UX and content pipelines.
- Update vendor and customer contracts with compliance warranties, audit rights, data obligations, and indemnities.
- Tighten training data governance, logging, model documentation, and change management.
- Evaluate sandbox participation for new products; use it to de-risk market entry and clarify regulator expectations.
- Pursue incentives: assess eligibility for the National AI Development Fund and sector-specific benefits.
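For the inventory and gap-assessment items above, the comparison step can be as simple as diffing a system's evidenced controls against the obligations for its tier. The sketch below reuses the illustrative tier-to-obligation labels from the risk classification section; they are paraphrases, not statutory wording, and the real checklist should come from the Law and its guidance.

```python
# Minimal gap-assessment sketch: obligations a system should meet (per its provisional tier)
# minus the controls the team can already evidence. Labels are illustrative paraphrases.
REQUIRED_BY_TIER = {
    "high": {
        "pre-market conformity assessment",
        "registration in the National AI Database",
        "human oversight measures",
        "incident reporting",
        "technical documentation and logging",
    },
    "medium": {"user disclosure", "content labeling", "feedback channel"},
    "low": {"general principles", "post-market monitoring"},
}

def gap_assessment(tier: str, evidenced_controls: set[str]) -> set[str]:
    """Return the obligations not yet evidenced for a system in the given tier."""
    return REQUIRED_BY_TIER.get(tier, set()) - evidenced_controls

if __name__ == "__main__":
    missing = gap_assessment(
        "high",
        {"technical documentation and logging", "human oversight measures"},
    )
    print(sorted(missing))  # items to close before the transition period ends
```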
How it compares internationally
Vietnam's model mirrors leading risk-based frameworks while reinforcing national autonomy in data and infrastructure. For reference on comparable structures, see the EU AI Act in the Official Journal.
Policy details and implementation updates are expected from MoST; watch the Ministry's channels for forthcoming guidance.
Capability building
Compliance will live or die on process and skills. If your team needs structured upskilling by role, these resources can help: AI courses by job function.
Bottom line
Vietnam is moving AI from experimentation to accountable deployment. If you integrate compliance into product design, participate in sandboxes, and build local partnerships or R&D, you'll convert regulatory work into durable market access, and do it on time.