Vietnam to Update AI Strategy and Enact AI Law by 2025, Building National AI Infrastructure

Vietnam plans to update its AI strategy and pass an AI Law by end-2025, emphasizing AI infrastructure, shared data, and human oversight. Counsel should prepare for tighter rules on data, risk, and security.

Categorized in: AI News, Legal
Published on: Sep 21, 2025

Vietnam to update national AI strategy and AI Law by late 2025: what legal teams need to know

Vietnam plans to publish an updated national AI strategy and a new AI Law by the end of 2025. The policy line is clear: treat AI as core infrastructure, build national compute and data assets, and raise competitiveness while keeping humans in control.

Senior officials have flagged a national AI supercomputing center, shared open AI data, and a risk-aware approach that keeps decisions with people. Advisors also point to "technology diplomacy" and "data diplomacy," signaling cross-border cooperation and standards will matter as much as local rules.

Why this matters for counsel

The 2021 AI strategy is being refreshed to match generative AI at scale and intensifying international competition. Expect a broader legal perimeter: data governance, safety, accountability, cybersecurity, procurement, and sector-specific rules tied together by a national data architecture.

Vietnam is already among ASEAN's leaders in AI readiness and adoption. A formal AI Law will set expectations for how organizations build, buy, deploy, and audit AI systems in both public and private sectors.

Signals from officials: emerging pillars of the AI framework

  • AI as infrastructure: National supercomputing capacity and common data resources to lower entry barriers and standardize practices.
  • Data development and architecture: Clear guidance for ministries and agencies on system design, connectivity, metadata, and access to shared datasets.
  • Human oversight: AI as an assistant; humans remain the decision-makers with defined accountability.
  • Alliances and interoperability: Technology and data diplomacy to enable cooperation, standards alignment, and lawful data sharing.
  • Fast, safe, humane deployment: Risk awareness, security-by-design, testing, and monitoring likely to be front and center.

Rising risk: AI-driven cyberattacks

At a recent summit in Ho Chi Minh City, hundreds of security leaders exchanged methods and tools for applying AI in information security, on both the defense and attack sides. This trend points to stricter expectations around red-teaming, incident response, logging, and supplier assurance for AI systems.

What to prepare now

  • Map your AI footprint: Inventory models, use cases, datasets, third-party providers, and data flows (training, fine-tuning, inference).
  • Classify risks: Flag high-impact uses (e.g., employment, credit, healthcare, public services). Require pre-deployment assessments and approvals.
  • Data governance: Define legal bases for collection and use, consents where required, minimization, retention, and access controls. Track synthetic data usage and provenance.
  • Model lifecycle controls: Establish testing, bias and performance metrics, explainability standards where feasible, monitoring, and rollback plans.
  • Security-by-design: Threat models for prompt injection, data leakage, model theft, supply-chain risks, and shadow AI. Mandate logging and incident reporting.
  • Human oversight and accountability: Specify decision rights, human review thresholds, and escalation. Document who is responsible for outcomes.
  • Contract terms with AI vendors: Training data rights and restrictions, IP ownership of outputs, confidentiality, audit rights, security controls, bias testing, uptime/SLA, incident notice, and indemnities (IP infringement and security).
  • Procurement and audits: Prepare for public-sector-style due diligence: data sheets, model cards, evaluation reports, and conformity evidence.
  • Cross-border data: Plan for transfer impact assessments, localization requirements (if any emerge), and processor/sub-processor transparency.
  • Policy and training: Clear acceptable use policies, approval gates for new tools, and role-based training for developers, product owners, and reviewers.
  • Engage early: Participate in consultations, pilot sandboxes, and standards groups. Align internal policies with recognized frameworks like the OECD AI Principles.

Sector angles to watch

  • Financial services: Model risk management, explainability, fairness, and consumer disclosure will draw scrutiny.
  • Healthcare: Data consent, clinical validation, and post-market surveillance for AI-enabled tools.
  • Employment and education: Bias controls, transparency to affected individuals, and appeal mechanisms for automated decisions.
  • Public sector: Procurement conditions, logging, auditability, and responsible use of national datasets.

The message from leadership is consistent: scale AI access, build shared infrastructure, and keep people in charge. For legal teams, that translates into clear accountability frameworks, audit-ready documentation, and disciplined vendor management.

Timeline and next steps

The updated strategy and AI Law are slated for release by end-2025. Use the lead time to close gaps: inventory, risk classification, contracts, security controls, and governance committees. Early movers will have fewer surprises when the law lands.
