Vietnam's AI Law: What HR and Legal Teams Need to Know Now
Vietnam has moved fast on AI governance. The Artificial Intelligence Law, effective March 1, sets a national rulebook that prioritizes people, safety, and accountability while still backing innovation and growth.
If you build, buy, or use AI in Vietnam, this affects your policies, vendor contracts, employee training, and compliance processes. Below is a clear breakdown and a practical checklist to get your house in order.
Scope and who's covered
- Applies to all Vietnamese agencies, organizations, and individuals, plus foreign entities operating AI in Vietnam.
- Covers AI from research and development through deployment and use within Vietnam's territory.
- Excludes AI activities used exclusively for national defense, security, and cryptography.
- Affirms sovereignty over cross-border AI activities impacting Vietnam.
Core principles you'll be held to
- AI must serve human beings; it must never replace human authority or responsibility.
- Safety, transparency, accountability, and protection of lawful rights are mandatory.
- National interests and public interests must be safeguarded.
- Green, inclusive, and sustainable AI development is encouraged and supported.
Risk-based obligations (high, medium, low)
The Law adopts a tiered oversight model to focus controls where risks are higher.
- High-risk systems: Likely to cause significant harm to life, health, lawful rights, national/public interests, or security. Obligations include conformity assessments, technical documentation, operational logs, and human intervention capability.
- Medium-risk systems: Must ensure transparency and provide explanations on request by competent authorities.
- Low-risk systems: Subject to oversight when signs of violations are detected.
For Legal and HR: map every in-scope tool to a risk level, assign owners, and document controls before deployment.
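That mapping can live in a simple machine-readable register. Below is a minimal sketch in Python; the system names, control names, and required-control list are illustrative assumptions, not terms prescribed by the Law:

```python
from dataclasses import dataclass, field

# The Law's three oversight tiers.
TIERS = ("high", "medium", "low")

@dataclass
class AISystem:
    name: str       # hypothetical system identifier
    tier: str       # one of TIERS
    owner: str      # accountable person or team
    controls: list = field(default_factory=list)  # documented controls

    def __post_init__(self):
        if self.tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.tier}")

# Example register entries (hypothetical systems).
register = [
    AISystem("resume-screening-tool", "high", "HR Ops",
             ["conformity assessment", "operational logs", "human override"]),
    AISystem("faq-chatbot", "medium", "Legal",
             ["transparency notice", "explanations on request"]),
]

# Flag high-risk systems missing a required control before deployment.
# The required set here is an assumption; derive yours from the Law's text.
def missing_controls(system, required=("conformity assessment", "human override")):
    return [c for c in required if c not in system.controls]

for s in register:
    if s.tier == "high" and missing_controls(s):
        print(f"{s.name}: missing {missing_controls(s)}")
```

Keeping the register as data (rather than a spreadsheet tab) lets you gate deployments automatically: a CI check or review script can refuse to promote any high-risk system whose register entry lacks its documented controls.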
Content transparency and deepfake controls
- AI-generated audio, images, and video must be labeled or clearly marked so users can recognize their artificial origin.
- Deliberate and systematic use of fabricated or simulated elements of real persons or events to deceive or manipulate people is prohibited.
- Providers must describe intended use, operating principles, data sources, and risk controls, without disclosing source code, algorithms, parameters, or trade or technology secrets.
Update your content policies and employee guidelines to include labeling rules, review workflows, and escalation paths for suspected synthetic media.
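Labeling is easiest to enforce at the point of generation. Here is a minimal sketch, assuming each media asset carries a plain metadata dictionary; the field names are illustrative, since the Law requires clear marking but does not prescribe a format:

```python
# Attach a machine-readable AI-origin label to generated media metadata.
# Field names below are hypothetical, not mandated by the Law.
def label_ai_content(metadata: dict, generator: str) -> dict:
    labeled = dict(metadata)
    labeled["ai_generated"] = True
    labeled["generator"] = generator
    labeled["disclosure"] = "This content was generated by AI."
    return labeled

# Gate publication: human-made content passes; AI content needs a disclosure.
def ready_to_publish(meta: dict) -> bool:
    return not meta.get("ai_generated") or "disclosure" in meta

asset = {"title": "Q3 product demo voiceover", "format": "audio/mp3"}
labeled = label_ai_content(asset, generator="internal-tts-v2")
print(ready_to_publish(labeled))
```

Wiring the `ready_to_publish` check into your publishing workflow gives you an auditable control point: unlabeled synthetic media is blocked before release rather than caught after a complaint.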
National AI infrastructure and data
- Vietnam will develop unified, open, and secure national AI infrastructure with support from the State, enterprises, and social organizations.
- Organizations and individuals are encouraged to make AI-related data available to state agencies and others by agreement.
- The Prime Minister will issue a list of priority datasets for AI development, with emphasis on cultural data, Vietnamese and minority languages, and data on administration, healthcare, education, agriculture, environment, transport, and socio-economic development.
- A single-window AI portal and a national database on AI systems will support sandbox registration and disclosure of AI system information.
Monitor official updates for portal access and dataset lists via the Ministry of Information and Communications (MIC).
Innovation, incentives, and procurement
- The national AI strategy will be reviewed and updated at least every three years or when technology/market shifts occur.
- Organizations in the AI ecosystem can access the highest incentives under science and technology, investment, digital industry, high technology, and digital transformation laws.
- Support includes access to infrastructure, data, and testing environments for R&D and commercialization.
- The State will prioritize AI products and services in public procurement, and a national AI development fund will be established.
Ethics and human rights
- A national AI ethics framework will set principles on safety, fairness, transparency, and respect for human rights.
- Education policy includes fundamental AI content, computational thinking, digital skills, and technology ethics at the general education level, with encouragement for AI and data programs in vocational and higher education.
For benchmarking against global ethics baselines, see UNESCO's Recommendation on the Ethics of Artificial Intelligence.
What HR and Legal should do now
- Inventory and classify: List all AI systems in use or planned. Assign risk tier (high/medium/low) and responsible owners.
- Run assessments: For high-risk, prepare conformity assessments, technical documentation, logging, and human-in-the-loop controls.
- Update policies: Add AI-use rules, content labeling requirements, and prohibited practices (e.g., deceptive synthetic media).
- Contract with vendors: Require transparency on intended use, operating principles, data sources, risk controls, and incident reporting. Ensure cooperation with authorities.
- Employee training: Train teams on labeling AI content, escalating deepfake concerns, and responsible AI use, including bias, safety, and privacy.
- Data governance: Document data sources, consent bases, retention rules, and access controls for AI training and inference.
- Incident readiness: Stand up processes to pause models, intervene manually, and log events for audits.
- Cross-border oversight: Confirm that offshore providers and processing respect Vietnam's jurisdiction and AI obligations.
- Procurement alignment: For public-sector work or bids, align offerings with national priorities and labeling duties.
Checklist for high- and medium-risk systems
- Conformity assessment plan and evidence (for high-risk).
- Technical documentation covering intended purpose, operating principles, data sources, and risk controls.
- Operational logs configured and monitored.
- Human intervention and override capability tested and documented.
- Transparent explanations available on request (at minimum for medium-risk).
- Content labeling implemented where outputs include audio, image, or video.
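The logging and human-intervention items above can be combined in one pattern: every automated decision is logged, and a human override both halts the system and leaves an audit trail. A minimal sketch, with hypothetical class and event names:

```python
import time

class AuditedModel:
    """Wraps a model callable with an operational log and a manual pause switch."""

    def __init__(self, predict):
        self._predict = predict
        self._paused = False
        self.log = []  # in production, ship entries to append-only storage

    def _record(self, event, **details):
        self.log.append({"ts": time.time(), "event": event, **details})

    def pause(self, reason):
        # Human intervention: halt automated decisions and record why.
        self._paused = True
        self._record("paused", reason=reason)

    def __call__(self, inputs):
        if self._paused:
            self._record("blocked", inputs=inputs)
            raise RuntimeError("model paused pending human review")
        output = self._predict(inputs)
        self._record("decision", inputs=inputs, output=output)
        return output

# Usage: wrap any scoring function (hypothetical example).
model = AuditedModel(lambda x: "approve" if x > 0.5 else "review")
model(0.9)
model.pause("suspected bias incident")
```

The point of the wrapper is that the pause and the blocked attempts are themselves logged, so the audit trail demonstrates both that human override capability exists and that it was exercised.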
Governance and recordkeeping
- Appoint accountable owners for each AI system and define approval gates before deployment.
- Maintain a live register of systems, risk levels, assessments, and controls.
- Set review cycles to match technology and market changes, and align with the Law's strategic review cadence.
Bottom line
The AI Law sets clear expectations: put people first, make systems safe and transparent, and document how you manage risk. If you formalize these controls now, you reduce exposure and position your organization to benefit from national incentives and public procurement opportunities.