Vietnam's first AI law adopts EU-style risk tiers with a pro-innovation edge

Vietnam's first AI law takes effect on 1 March 2026, with a risk-based framework and clear transparency rules. Legal teams should classify systems early and prepare notifications.

Published on: Feb 06, 2026

Vietnam's first standalone AI Law: What legal teams need to know

Vietnam has moved fast. The Law on Artificial Intelligence was enacted on 10 Dec. 2025 and takes effect on 1 March 2026, replacing the AI provisions of the Law on Digital Technology Industry, which briefly took effect on 1 Jan. 2026.

The new law adopts a risk-based approach and consolidates oversight under one framework. It mirrors several concepts from the EU AI Act while keeping room for local discretion.

Scope and intent

The law sets a pro-innovation posture while preserving human rights, privacy, national interests and security. It gives regulators flexible enforcement tools without locking into long annexes or static lists.

Core principles (Article 4)

  • Human control over AI decisions and outcomes.
  • Fairness, transparency, non-bias and accountability across the life cycle.
  • Compliance with Vietnam's Constitution and laws, and respect for cultural values.
  • Green, inclusive and sustainable development with energy efficiency in mind.

Prohibited acts (Article 7)

Baseline bans apply to all AI activities, regardless of risk level. The list is broad by design, giving authorities latitude to act against harmful uses.

  • Using AI for unlawful purposes or to infringe rights.
  • Simulating real people or events to deceive or manipulate perception.
  • Exploiting vulnerable groups or spreading forged material that threatens national security or public order.
  • Unlawful data processing that violates data protection, IP or cybersecurity laws.
  • Obstructing human oversight or concealing mandatory disclosures.

Risk classification and notification

AI systems fall into three tiers: high-risk (significant harm to life, health, rights or national security), medium-risk (risk of user confusion from undisclosed AI interactions or content), and low-risk (all others). Criteria include impact on rights and safety, number of users, scale of influence and sector sensitivity (e.g., health care).

  • Providers must self-classify before use or market placement.
  • Medium- and high-risk systems must be notified to the Ministry of Science and Technology through the national one-stop AI portal.
  • Reclassification is required if modifications raise risk.
  • Low-risk systems may make voluntary disclosures.

Detailed criteria and lists will be issued by the prime minister, allowing faster updates as risks change.
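To make self-classification auditable, it helps to encode the tier logic somewhere reviewable. The Python sketch below is a minimal illustration of the three tiers as described above; the boolean criteria are placeholders, since the binding thresholds and lists will come from the prime minister's decrees, and factors like user numbers and sector sensitivity will need to be weighed in as those details land.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class SystemProfile:
    # Placeholder criteria; binding definitions will come by decree.
    may_cause_significant_harm: bool    # to life, health or rights
    affects_national_security: bool
    undisclosed_user_interaction: bool  # the user-confusion test
    generates_undisclosed_content: bool

def classify(p: SystemProfile) -> RiskTier:
    """Map a system profile to the law's three tiers (illustrative only)."""
    if p.may_cause_significant_harm or p.affects_national_security:
        return RiskTier.HIGH
    if p.undisclosed_user_interaction or p.generates_undisclosed_content:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Example: a customer-facing chatbot that doesn't disclose it is AI.
tier = classify(SystemProfile(False, False, True, False))
assert tier is RiskTier.MEDIUM  # triggers the portal notification duty
```

Anything landing in the medium or high tier carries the notification duty via the one-stop portal, so the classification output should feed directly into that workflow.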

Roles and responsibilities (Article 3)

The law defines developers (design/training), providers (market placement), deployers (professional use), users (direct interaction) and affected persons (impacted parties). Nonmarket R&D is carved out to encourage experimentation.

Obligations primarily sit with providers and deployers. Developers carry certain incident duties but fewer ongoing compliance tasks.

Incident response (Article 12)

All parties must maintain safety, security and reliability, and proactively detect and fix harms. A "serious incident" covers events causing or risking significant damage to people, property, rights, cybersecurity or critical systems.

Developers and providers must apply fixes, suspend or withdraw systems when needed, and notify authorities via the one-stop portal. Deployers and users must log incidents, report promptly and support remediation. The law does not set exact timelines; a government decree will fill in process details.
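In practice, the duties above translate into an internal incident log with one record per event. A minimal sketch, assuming illustrative field names; the decree will define the actual process and timelines:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentRecord:
    """One internal log entry per incident; field names are illustrative."""
    system_id: str
    detected_at: datetime
    description: str
    serious: bool                  # meets the law's 'serious incident' test
    actions_taken: list[str] = field(default_factory=list)  # fix / suspend / withdraw
    reported_to_portal: bool = False  # one-stop portal notification sent

def needs_portal_report(rec: IncidentRecord) -> bool:
    # Developers and providers must notify authorities for serious incidents;
    # exact reporting timelines are left to a future decree.
    return rec.serious and not rec.reported_to_portal
```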

Transparency duties (Article 11)

  • Providers: AI that interacts with people must be clearly recognizable as artificial. Generated audio, images and video require machine-readable markings (a marking sketch follows this list).
  • Deployers: For public-facing text, audio, images or video that could mislead about real events or people, give clear notice. Clearly label simulated content imitating real persons, voices or events.
  • Entertainment and creative works should be labeled in a way that doesn't disrupt viewing or enjoyment.
  • The government will standardize formats for notices and labels.
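For generated images, one practical approach is to embed the machine-readable marking in file metadata. A minimal sketch using Pillow, assuming a hypothetical "ai-generated" key; the actual marking format will be whatever the government standardizes:

```python
# A minimal PNG-metadata sketch using Pillow (pip install Pillow);
# the metadata keys below are illustrative, not the official format.
from PIL import Image, PngImagePlugin

def mark_ai_generated(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical key
    img.save(dst_path, pnginfo=meta)                # dst_path should be a .png
```

The same idea extends to audio and video containers once the standardized formats are published.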

High-risk systems: audits, conformity and oversight

  • Periodic audits and post-market surveillance, with extra scrutiny in sectors like health care and education.
  • Mandatory measures: risk assessments, human oversight, registration in a national database and incident reporting.
  • Conformity certification is required for select systems on a prime minister-issued list.
  • For other high-risk systems, providers may self-assess or engage registered organizations.
  • Foreign providers of high-risk AI must set up a local contact point; some certified systems may require a commercial presence or authorized representative. Details will come by decree.

Medium- and low-risk oversight

  • Medium-risk systems: supervision via reports, sample audits or independent assessments.
  • Providers must be able to explain purposes, operating principles, key inputs and safety measures during audits or incident reviews. Source code, algorithms and trade secrets are expressly protected.
  • Deployers face similar accountability for operations, risk controls and incident response when inspected or after harms.
  • Low-risk systems: minimal obligations; encouraged to follow voluntary standards and basic disclosures.

Liability and penalties (Article 29)

  • Noncompliance can trigger administrative sanctions, penal exposure and civil damages. Specific fine ranges will be set by a government decree.
  • The law emphasizes fault-based allocation. For high-risk systems, deployers compensate victims first, then seek reimbursement from developers/providers under contract.
  • Exemptions apply for victim fault or force majeure.
  • If third parties hijack a system, providers/deployers share liability when at fault.

Incentives: funding, sandboxes and clusters

  • National AI Development Fund to support research and development.
  • Regulatory sandboxes with streamlined procedures and exemptions.
  • AI clusters in high-tech parks with tax and infrastructure benefits.
  • Preferences for enterprises that share data or models.

Grace periods for existing systems

  • Systems on the market before 1 March 2026 get 12 months (until 1 March 2027); a deadline lookup sketch follows this list.
  • Health care, education and finance get 18 months (until 1 Sept. 2027).
  • Authorities may suspend earlier if a system risks serious damage.
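Because the deadlines differ by sector, a small lookup keeps them consistent across teams. A minimal sketch, with illustrative sector labels rather than terms from the law:

```python
from datetime import date

# Grace-period deadlines for systems on the market before 1 March 2026;
# the sector keys are illustrative labels, not legal categories.
GRACE_DEADLINES = {
    "health_care": date(2027, 9, 1),
    "education":   date(2027, 9, 1),
    "finance":     date(2027, 9, 1),
}
DEFAULT_DEADLINE = date(2027, 3, 1)  # general 12-month grace period

def compliance_deadline(sector: str) -> date:
    return GRACE_DEADLINES.get(sector, DEFAULT_DEADLINE)
```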

Timing partially matches the EU AI Act's rollout, with full compliance for many existing systems landing around mid-2027. For cross-border programs, this simplifies multi-jurisdiction planning.

Action checklist for legal and compliance

  • Inventory AI systems, map roles (developer/provider/deployer/user) and assign owners; a data-model sketch follows this checklist.
  • Pre-classify systems, document rationale and prepare notifications via the AI portal.
  • Build transparency notices and content labeling workflows, including machine-readable marks.
  • Stand up incident playbooks with reporting channels, suspension/withdrawal criteria and evidence logs.
  • Update contracts: warranties, audit rights, data/IP clauses, incident cooperation, and indemnities reflecting the deployer-first compensation rule.
  • Protect trade secrets while enabling audits; prepare summary documentation for medium/high risk.
  • Review data protection, IP and cybersecurity overlaps; align with Vietnam's PDPL and related laws.
  • For foreign providers, plan the local contact point and, if needed, an authorized representative.
  • Track decrees (risk criteria, certification lists, labels, sanctions) and submit comments during consultations.
  • Schedule internal audits and post-market monitoring for high-risk deployments.
  • Train legal, product and ops teams on risk tiers, disclosures and incident response.
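For the inventory item at the top of this checklist, a simple data model can track role, owner, tier and notification status per system. A sketch with illustrative field names:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    DEVELOPER = "developer"   # design/training
    PROVIDER = "provider"     # market placement
    DEPLOYER = "deployer"     # professional use
    USER = "user"             # direct interaction

@dataclass
class InventoryRow:
    system_name: str
    our_role: Role
    owner: str               # accountable team or person
    risk_tier: str           # pre-classified: high / medium / low
    rationale_doc: str       # where the classification rationale lives
    portal_notified: bool    # required for medium- and high-risk systems

inventory = [
    InventoryRow("support-chatbot", Role.DEPLOYER, "legal-ops",
                 "medium", "docs/chatbot-risk-memo.md", False),
]
```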

Key dates

  • 10 Dec. 2025: AI Law enacted.
  • 1 Jan. 2026: AI provisions in the Law on Digital Technology Industry took effect (now superseded).
  • 1 March 2026: AI Law effective.
  • 1 March 2027: General grace period ends.
  • 1 Sept. 2027: Grace period ends for health care, education and finance.

Bottom line: classify early, document everything, and build practical controls you can evidence. The law rewards proportionate governance and makes room for updates, so stay close to the decrees that follow.

