Vietnam's Draft AI Law Sets the Pace: Risk Tiers, Timelines, and What Businesses Should Do Now

Vietnam's draft AI law brings a risk-based regime with clear duties, approvals, and a national database, even for foreign providers. Act early to cut friction and tap incentives.

Categorized in: AI News, Legal
Published on: Oct 30, 2025

Vietnam Draft AI Law: Regulatory Milestone and Business Implications

Vietnam is moving early on AI governance. The Draft Artificial Intelligence Law sets out a comprehensive, risk-based framework with clear duties for developers, deployers, and foreign providers whose systems impact Vietnam.

For legal teams, this is a chance to get ahead: align your AI portfolio with the draft now, and you'll reduce approval friction, secure incentives, and avoid costly remediation later.

Scope, Guiding Principles, and Governance

The law applies to any organization, local or foreign, whose AI systems affect users, markets, or national interests in Vietnam. Its extraterritorial reach mirrors global practice and prevents avoidance by operating offshore.

Seven principles anchor the regime:

  • Human-centric: Humans retain ultimate supervision. AI supports, not replaces, critical decisions.
  • Safety, fairness, transparency, accountability: Build for reliability, non-discrimination, traceability, and clear liability.
  • National autonomy with international integration: Strengthen local tech, infrastructure, and data while cooperating globally.
  • Inclusive and sustainable: Equitable benefits, environmental care, and cultural integrity.
  • Balanced policymaking: Guardrails without stalling innovation.
  • Risk-based management: Oversight scales with potential harm.
  • Promotion of innovation: A supportive legal environment for R&D, startups, and commercialization.

Oversight will be centralized under the Ministry of Science and Technology and a National AI Commission, slated for setup by July 1, 2026. Expect unified enforcement, a national AI database, pre-approvals for high-risk uses, and state-issued technical standards.

Risk-Based Classification and Core Obligations

The draft adopts a four-tier model aligned with global trends (see the EU's approach for reference). Compliance scales with risk.

  • Unacceptable risk: Prohibited. Examples: cognitive manipulation, mass facial recognition without consent, destabilizing deepfakes.
  • High risk: Pre-market approval required for use in finance, healthcare, education, infrastructure, justice, and similar sectors.
  • Medium risk: Transparency duties. Clear AI labels, user notice, and feedback mechanisms.
  • Low risk: Self-regulation with post-market monitoring.

Common duties include risk assessments and detailed technical documentation. High-risk systems require conformity assessments, registration in the National AI Database, human oversight measures, and incident reporting. Foreign providers must appoint a local legal representative as the compliance interface.
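For portfolio triage, the tier-to-duty mapping above can be captured as a simple lookup that compliance tooling or spreadsheets can mirror. This is an illustrative sketch only: the tier names and duty labels are this example's own shorthand for the draft's summary, not statutory terms.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

# Illustrative duty mapping distilled from the draft's four-tier model;
# labels are this sketch's shorthand, not official legal terminology.
DUTIES = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "pre-market approval",
        "conformity assessment",
        "registration in National AI Database",
        "human oversight measures",
        "incident reporting",
    ],
    RiskTier.MEDIUM: ["AI labelling", "user notice", "feedback mechanism"],
    RiskTier.LOW: ["self-regulation", "post-market monitoring"],
}

def duties_for(tier: RiskTier) -> list[str]:
    """Return the compliance duties associated with a risk tier."""
    return DUTIES[tier]

print(duties_for(RiskTier.MEDIUM))
```

Keeping the mapping in one place makes it easy to update as the draft evolves between readings.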

Application Restrictions

Bans target systemic threats: real-time biometric surveillance in public spaces (absent special approval), large-scale facial recognition via unauthorized scraping, and deceptive systems designed to manipulate public opinion or behavior.

The model is risk-based like the EU's, but with stronger emphasis on national sovereignty, data and infrastructure autonomy, and cultural stability.

National AI Infrastructure, Data Governance, and Incentives

The National AI Database will register and monitor AI systems, especially high-risk ones, enabling authorities to track conformity and intervene when needed. Entities contributing data and registering high-risk systems may receive preferential treatment.

Financing gets a boost: AI models, algorithms, and data assets can count as lawful capital contributions. The draft also provides for AI Clusters offering shared infrastructure, tax and land-use incentives, and government-backed research facilities.

A National AI Development Fund will supply grants, loans, and preferential financing to domestic startups, SMEs, and foreign investors. Regulatory sandboxes allow controlled testing with relaxed conditions to speed responsible market entry.

Business Implications and Sector Impact

Legal teams should plan for more rigorous governance around high-risk AI: data security controls, impact assessments, human oversight, and incident response. Foreign providers will need a local representative. Penalties may reach a percentage of global revenue, so expect enforcement with teeth.

Opportunities are clear:

  • Healthcare: Diagnostics and patient management may gain priority access to infrastructure and funding.
  • Finance/Fintech: Credit scoring, fraud detection, and digital payments under a regime that strengthens consumer trust.
  • Manufacturing/Logistics: Automation to support industrial upgrading, supply chain efficiency, and export goals.
  • Public services: E-government and smart city initiatives supported via PPPs and AI clusters.

Localization and integrated R&D will matter. Early participation in sandboxes and clusters can convert compliance spend into first-mover advantage.

Implementation Timeline

  • From Jan 1, 2026: Regulatory infrastructure and initial implementation framework.
  • From Jul 1, 2027: Full obligations for high-risk AI systems.
  • From Jul 1, 2029: Legacy high-risk systems begin a 24-month transition to register and achieve conformity.

Action Checklist for Legal Teams

  • Inventory AI systems, vendors, and data flows that affect Vietnam. Map each to risk tiers.
  • Build pre-market approval files for high-risk systems: technical docs, risk/impact assessments, human oversight design, incident workflows.
  • Prepare for registration in the National AI Database. Assign owners and record-keeping duties.
  • Contract for compliance: add clauses on audit, transparency, traceability, security, and incident cooperation. Include data provenance and consent warranties.
  • Appoint a local legal representative (for foreign providers) and define escalation paths with authorities.
  • Leverage incentives: assess eligibility for AI Clusters, the National AI Development Fund, PPP opportunities, and sandbox pilots.
  • Plan for enforcement risk: budget for conformity assessments and potential revenue-based penalties.
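The first two checklist items, inventorying systems and mapping them to risk tiers, lend themselves to a simple, auditable record. A minimal sketch, assuming hypothetical system names and a team-defined sector heuristic; the output is a starting point for counsel review, not a legal determination:

```python
from dataclasses import dataclass, field

# Sectors the draft flags for pre-market approval. This set is a
# triage heuristic for this sketch, not an exhaustive statutory list.
HIGH_RISK_SECTORS = {"finance", "healthcare", "education", "infrastructure", "justice"}

@dataclass
class AISystem:
    name: str            # hypothetical system identifier
    vendor: str
    sector: str
    affects_vietnam: bool
    tier: str = field(default="unclassified")

def triage(system: AISystem) -> AISystem:
    """Assign a provisional tier; counsel reviews every result."""
    if not system.affects_vietnam:
        system.tier = "out-of-scope"
    elif system.sector in HIGH_RISK_SECTORS:
        system.tier = "high"
    else:
        # Medium vs. low depends on transparency duties and requires judgment.
        system.tier = "needs-review"
    return system

portfolio = [
    AISystem("credit-scoring-v2", "VendorA", "finance", True),
    AISystem("warehouse-routing", "VendorB", "logistics", True),
]
for s in portfolio:
    triage(s)
    print(s.name, s.tier)
```

Recording vendor and sector per system also feeds directly into the contract-clause and National AI Database registration tasks above.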

Strategic Outlook

This draft moves AI from a largely unregulated space to a state-guided strategic industry. The message is clear: build responsibly, document thoroughly, and you'll get a predictable path to market plus access to infrastructure and financing.

Use the timeline now to rework product roadmaps, procurement, and vendor governance. Early movers will shape standards and secure preferential access.


