Vietnam Advances Comprehensive AI Law for 2025 to Compete in the Global AI Race

Vietnam plans a risk-based AI law by 2025 with strict transparency, labeling, and human oversight. Legal teams should inventory AI, rate risks, and tighten data and vendor terms.

Published on: Sep 28, 2025

Vietnam's Draft AI Law: What Legal Teams Need to Prepare Before 2025

Vietnam is moving toward a comprehensive AI law, with a draft slated for submission to the National Assembly by the end of 2025. The message at AI4VN 2025 was clear: AI will be encouraged, but guardrails will be firm. For in-house counsel and law firm practitioners, this is the window to build compliance muscle before obligations land.

Core Principles That Will Drive the Statute

  • Human-centred development: People-first outcomes and human control in sensitive use cases.
  • Safety and transparency: Clear disclosures, including labelling AI-generated content.
  • Inclusiveness and sustainability: Broad access and long-term impact in view.
  • Balanced governance: Risk-based oversight to focus on high-risk systems.
  • Harmony: Policy coherence across sectors and stakeholders.

Expect a risk-tier model with stricter obligations for high-risk systems, mirroring global practice. If you work in regulated sectors such as education, health care, and finance, plan for human-in-the-loop review and documented oversight.

Transparency Will Be Non-Negotiable

Clear labelling to distinguish human- from machine-generated content is flagged as a key feature. Legal teams should plan disclosures, user notices, and content policies that are visible, consistent, and testable. Consider watermarking, provenance metadata, and internal guidance on synthetic media.
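To make this concrete, here is a minimal sketch of provenance metadata and a user-facing label, in Python. The field names and schema are illustrative assumptions, not a mandated format; the draft law's labelling rules will define what is actually required.

```python
# Illustrative provenance metadata attached to one AI-generated asset.
# All field names here are assumptions, not a regulatory schema.
provenance = {
    "content_id": "asset-0001",
    "generated_by_ai": True,
    "model": "example-model-v3",          # hypothetical model identifier
    "generated_at": "2025-09-28T09:00:00Z",
    "human_edited": False,
    "disclosure_label": "AI-generated content",
}

def user_facing_label(meta: dict) -> str:
    """Return the notice to display alongside the content, or "" if none is needed."""
    if meta.get("generated_by_ai"):
        suffix = " (human-edited)" if meta.get("human_edited") else ""
        return meta["disclosure_label"] + suffix
    return ""
```

Keeping the label derivable from stored metadata (rather than hand-typed per asset) makes disclosures consistent and testable, which is the property regulators are likely to probe.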

Signals From Government Practice

The Ministry of Science and Technology (MoST) is piloting AI in drafting, analysis, and process automation. This points to procurement standards, model evaluation, and record-keeping requirements likely showing up in the final law and in public-sector contracts. Expansion into smart cities and public services suggests forthcoming rules on safety, auditing, and public accountability.

Ethics and Accountability in Sensitive Sectors

Final decisions remain with humans in education, health care, and finance. Build decision logs that show when humans reviewed and overrode AI outputs. Ensure your risk assessments treat bias, safety, and explainability as first-class issues, not afterthoughts.

International Alignment Matters

Vietnam's cooperation with Australia and its issuance of responsible AI R&D guidelines in June 2024 show alignment with global norms. For framing, review the EU's risk-based approach under the AI Act and the OECD AI Principles.

What Counsel Should Do Now

1) Build Your AI System Inventory

  • Catalogue all AI uses: internal tools, vendor-provided features, prototypes, and shadow IT.
  • Tag each by purpose, data types, user groups, and potential impact.
  • Identify sensitive-sector deployments and any automated decision-making affecting individuals.
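An inventory entry can be as simple as a structured record per system. The sketch below uses a Python dataclass with illustrative field names (these are assumptions, not a prescribed schema) and flags the deployments that deserve closest review.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for one AI use; field names are hypothetical."""
    name: str
    purpose: str
    vendor: str                # "internal" for in-house tools
    data_types: list[str]      # e.g. ["customer PII", "transaction history"]
    user_groups: list[str]
    sector: str                # e.g. "finance", "education", "general"
    automated_decisions: bool  # affects individuals without human review?

inventory = [
    AISystemRecord(
        name="support-chat-assistant",
        purpose="Draft replies to customer tickets",
        vendor="ExampleVendor",          # hypothetical vendor
        data_types=["customer PII"],
        user_groups=["support agents"],
        sector="general",
        automated_decisions=False,
    ),
]

# Flag sensitive-sector or automated-decision deployments for closer review.
flagged = [r for r in inventory
           if r.sector in {"education", "health care", "finance"} or r.automated_decisions]
```

Even a spreadsheet with these columns works; the point is that every system carries the same fields, so risk triage becomes a filter rather than a debate.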

2) Pre-Classify by Risk

  • Mark candidates for "high-risk" treatment based on use case, impact, and sector.
  • Prepare to justify the classification with a short memo and criteria.
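The classification criteria can be written down as a small rule, which doubles as the "short memo" artifact. The thresholds below are hypothetical placeholders pending the enacted law's actual tiers.

```python
# Hypothetical pre-classification rule: treat a system as a "high-risk"
# candidate if it operates in a sensitive sector, makes automated decisions
# about individuals, or has high assessed impact. Criteria are assumptions.
SENSITIVE_SECTORS = {"education", "health care", "finance"}

def risk_tier(sector: str, automated_decisions: bool, impact: str) -> str:
    """Return a provisional tier; final criteria await the enacted law."""
    if sector in SENSITIVE_SECTORS or automated_decisions or impact == "high":
        return "high-risk (candidate)"
    return "standard"
```

Encoding the rule once means every classification decision cites the same criteria, which is easy to justify and to revise when the statute lands.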

3) Prepare Transparency and Labelling

  • Draft standard disclosures for AI assistance, recommendations, and AI-generated content.
  • Add UX labels, output watermarks, and provenance cues where feasible.
  • Create a policy for handling synthetic media in marketing, customer service, and legal evidence.

4) Implement Human Oversight and Record-Keeping

  • Define decision checkpoints where a human must review or approve outputs.
  • Log model versions, prompts, key decisions, and overrides for auditability.
  • Stand up incident reporting for safety, privacy, and security events.
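A decision log entry might look like the following sketch: one JSON record per checkpoint capturing model version, the reviewed output, and the human action. The field names are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

def log_decision(model_version: str, prompt: str, output_summary: str,
                 reviewer: str, action: str) -> str:
    """Build one append-only audit record as JSON.

    'action' is one of 'approved', 'overridden', or 'escalated' (assumed vocabulary).
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output_summary": output_summary,
        "reviewer": reviewer,
        "action": action,
    }
    return json.dumps(entry)

record = log_decision("model-v1.2", "Summarise contract risks",
                      "Flagged indemnity clause", "j.nguyen", "overridden")
```

Records like this answer the auditor's core question: who saw the AI output, when, and what did they do about it.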

5) Tighten Data Governance

  • Document data sources, consent basis, retention, and cross-border transfers.
  • Set rules for training data, fine-tuning, and synthetic data generation.
  • Run impact assessments that cover bias, security, and content risks.

6) Update Contracts and Vendor Diligence

  • Insert AI-specific clauses: disclosures, audit rights, security, data rights, IP indemnities, and model-change notices.
  • Require evidence of testing, monitoring, and compliance for high-risk tools.

7) Align Compliance Across Functions

  • Coordinate with privacy, cybersecurity, and procurement on a shared control set.
  • Train product, engineering, and operations on disclosure, escalation, and documentation.

Policy Context to Track

MoST's responsible AI guidance (June 2024) is an early signal for research and product teams. The Australia-Vietnam Strategic Technology Centre (launched 2025) prioritises AI, 5G/6G, and cybersecurity, indicating continued cross-border cooperation and knowledge transfer. Expect further consultation rounds as the draft law moves closer to submission.

For Business Leaders: Opportunity With Guardrails

The government is offering preferential policies to encourage AI R&D. Competitive advantages (cost structure, policy support, growth potential) are real, but so are constraints: infrastructure, talent, legal clarity, and R&D spend. Counsel should balance enablement with enforceable controls to keep projects shippable.

Event Takeaways From AI4VN 2025

  • Risk-based law with clear transparency rules is on the way.
  • Public-sector AI adoption will set the tone for procurement and audits.
  • International collaboration is active, and ethics remains central.
  • AI infrastructure and agent readiness are priorities for enterprises.

Next Steps

  • Set up a policy watch for MoST updates and consultation drafts.
  • Pilot a "high-risk" compliance pack: impact assessment, testing plan, human oversight checklist, and disclosure templates.
  • Run a tabletop exercise for AI incident response, including content labelling failures and model drift.

If your legal, risk, or compliance teams need focused AI governance upskilling, you can review role-based options here: Courses by Job.