On January 1, 2026, Vietnam became one of the first countries in Southeast Asia to operate under a dedicated, horizontal AI legal framework. The Law on Digital Technology Industry (Luật Công nghiệp Công nghệ số), passed by the National Assembly in mid-2025, sets out Vietnam's first statutory rules for artificial intelligence alongside a sweeping package of incentives for semiconductors, data centers, and digital-economy investment. It is, in effect, two laws stitched together: a risk-based AI rulebook borrowed in spirit from the EU AI Act, and an industrial policy designed to attract the very companies that would be regulated by it.
For a country whose digital economy is targeted to reach roughly 30 percent of GDP by 2030 under the government's own plans, this dual posture matters. Vietnam is trying to signal seriousness on AI safety without choking off the foreign investment — from chipmakers, hyperscalers, and AI startups — that its growth strategy depends on. Whether that balance holds will depend less on the statute itself than on the implementing decrees still being drafted by the Ministry of Information and Communications (now the Ministry of Science and Technology following the 2025 government reorganization).
What the law actually does
The AI chapter of the law introduces three core mechanisms familiar to anyone who has read Brussels' AI Act:
- Risk classification. AI systems are categorized by risk level, with the highest-risk uses — those affecting health, safety, and fundamental rights — subject to stricter pre-deployment obligations, including documentation, testing, and human-oversight requirements.
- Mandatory labeling of AI-generated content. Synthetic media, including deepfakes and AI-generated text, images, and audio, must be identifiable as such. This aligns Vietnam with similar transparency rules now appearing in the EU, China, and South Korea.
- Prohibited practices. Certain AI uses — broadly mirroring the EU's banned categories around manipulation, exploitation of vulnerabilities, and untargeted facial-recognition scraping — are off-limits regardless of risk classification.
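The statute does not prescribe data structures, but the tiered logic of the three mechanisms above can be sketched roughly as follows. The tier names and the mapping from tier to obligations here are illustrative — drawn from the obligations the law names (documentation, testing, human oversight; labeling; outright prohibition), not from any official taxonomy, which the implementing decrees have yet to define:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; the decrees will define the real categories."""
    PROHIBITED = "prohibited"  # banned uses, e.g. manipulation, untargeted scraping
    HIGH = "high"              # uses affecting health, safety, fundamental rights
    LIMITED = "limited"        # transparency duties, e.g. labeling synthetic media
    MINIMAL = "minimal"        # no additional obligations

# Hypothetical mapping from tier to pre-deployment obligations,
# based on the duties the law itself names.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be deployed"],
    RiskTier.HIGH: ["documentation", "testing", "human oversight"],
    RiskTier.LIMITED: ["label AI-generated output"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance duties attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The open question for implementation is where the boundary between HIGH and LIMITED falls — that threshold, not the tier structure itself, is what the decrees will decide.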
Crucially, the law also codifies an AI sandbox regime: a structured environment in which firms can test novel systems with regulator oversight but reduced compliance friction. Sandboxes have become a quietly important feature of credible AI regulation, and Vietnam's inclusion of one suggests the drafters understood that pure command-and-control would deter the startups they want to attract.
The other half: industrial policy
The same law contains generous incentives for semiconductor design, chip packaging, AI research, and digital infrastructure. These include preferential tax treatment, land-use support, and streamlined investment approvals — extending the playbook Vietnam has already used to attract electronics manufacturing. Samsung, Intel, and Amkor have all expanded chip-related operations in Vietnam in recent years, and the government has explicitly courted firms diversifying away from concentrated China-Taiwan supply chains.
This is the more interesting story. Few jurisdictions have so explicitly bundled AI guardrails with subsidies for the underlying compute and silicon stack. The EU AI Act and the bloc's Chips Act are separate instruments; Vietnam has chosen to legislate them as one.
The pro-innovation read
From an innovation perspective, there is much to like about Vietnam's approach — and a few warning signs worth flagging.
On the positive side, the law avoids the licensing-heavy approach taken by some Asian peers. It does not require pre-approval for general-purpose AI models, and the sandbox mechanism provides a path for novel applications. The decision to combine AI rules with semiconductor incentives also signals a coherent industrial strategy rather than reactive regulation. For a middle-income economy trying to climb the value chain, this is the right instinct.
The risks lie in implementation. "Risk classification" is only as proportionate as its thresholds: if too many ordinary business uses end up captured as high-risk, compliance costs will fall hardest on Vietnamese SMEs that cannot afford EU-style documentation regimes. The prohibited-practices list, if interpreted expansively, could collide with legitimate research uses. And mandatory content labeling — while sensible in principle — needs technical standards that interoperate with global watermarking initiatives (such as C2PA) rather than creating a Vietnam-specific compliance silo.
There is also the underlying speech question. Vietnam's broader information environment includes the 2018 Cybersecurity Law and Decree 147 of 2024, which already impose significant content-moderation obligations on platforms. Layering AI labeling rules on top of that architecture creates a risk that "AI-generated content" obligations become a vector for broader takedown demands. The proportionate path is to keep the labeling rule narrowly technical — a transparency obligation — and not let it bleed into content liability.
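What a narrowly technical label could look like is worth making concrete. The sketch below is purely illustrative — the decrees have not specified any format, and all field names are hypothetical — but it shows the shape of a machine-readable provenance record that stays interoperable with existing vocabularies (the `digital_source_type` value is the real IPTC term for AI-generated media, which C2PA manifests also use) rather than inventing a Vietnam-only schema:

```python
import json

def make_ai_label(asset_id: str, generator: str) -> str:
    """Build a minimal, illustrative AI-content label as JSON.

    Field names are hypothetical; only the digital_source_type value
    ("trainedAlgorithmicMedia") comes from the IPTC controlled vocabulary
    used by C2PA-style provenance tooling.
    """
    label = {
        "asset_id": asset_id,
        "digital_source_type": "trainedAlgorithmicMedia",
        "generator": generator,
        "claim": "This content was generated by an AI system.",
    }
    return json.dumps(label, indent=2)

print(make_ai_label("img-0001", "example-model-v1"))
```

A label of this kind is a pure transparency artifact: it says what the asset is, not whether it is lawful — which is exactly the line the decrees would need to hold.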
What to watch
The substantive impact will be set by the implementing decrees, expected through 2026. Three things deserve attention:
- How the high-risk thresholds are calibrated, and whether they default to EU definitions or carve out lighter-touch categories for domestic startups.
- Whether the sandbox is genuinely accessible — with clear entry criteria and reasonable timelines — or becomes a discretionary gate.
- How labeling obligations interact with platform liability rules under Decree 147 and the Cybersecurity Law.
Vietnam has, for now, made the right strategic bet: serious enough on guardrails to be taken seriously internationally, generous enough on incentives to keep capital flowing. Whether that bet pays off is a matter of regulatory craftsmanship over the next twelve months.