When Vietnam's Decree 147/2024/ND-CP took effect on December 25, 2024, it rewrote the rules of engagement for every major cross-border platform operating in the country. TikTok, Meta's Facebook, and YouTube must now verify users with real identities, remove flagged content within 24 hours, store certain data locally if they cross user-base thresholds, and disclose how their recommendation algorithms amplify content to Vietnamese audiences. Throughout 2025, the Ministry of Information and Communications (MIC) escalated enforcement, and the new Personal Data Protection Law — which includes provisions on automated decision-making — has stitched a second layer onto the regime.
On paper, this looks like the kind of "algorithmic accountability" agenda that policymakers from Brussels to Brasília have embraced. In practice, Decree 147 is a cautionary tale about what happens when legitimate concerns about recommender systems get fused with a political project of state-level information control. The diagnosis is partly right. The prescription is dangerous.
What Decree 147 Actually Does
The decree expands and replaces parts of Decree 72/2013 and Decree 27/2018, which had governed online information in Vietnam for over a decade. Several requirements stand out:
- Real-name verification. Users must register accounts with verified phone numbers or national IDs. Anonymous and pseudonymous accounts lose access to features such as livestreaming and monetization.
- 24-hour takedown windows. Platforms must remove content that authorities flag as illegal within 24 hours — and within 3 hours for content deemed an urgent threat to national security.
- Local data storage. Services that cross specified thresholds of Vietnamese users or transactions must store user data within Vietnam and maintain a local representative.
- Algorithmic disclosure. Platforms must provide regulators with information about how recommendation systems surface and amplify content to Vietnamese users.
- Livestream registration. Only verified accounts above defined thresholds can livestream commercially, with platform-level oversight obligations.
According to reports from Reuters and the Associated Press, MIC officials have repeatedly cited "toxic" and "false" content amplified by algorithms as the core justification, and have publicly pressed TikTok and Meta over recommendation behavior.
The Legitimate Part of the Concern
The worry that recommender systems can amplify low-quality, sensational, or harmful content is not invented. It animates the EU's Digital Services Act, which requires very large platforms to assess systemic risks from their recommendation systems and offer non-personalized options. It animates the UK Online Safety Act's duties around algorithmic risk for children. It informs ongoing debates in India and Brazil.
If Decree 147's algorithmic transparency provisions were narrowly drafted, independently audited, and paired with strong rule-of-law guarantees, they would not be unusual. Asking a platform that operates at population scale to explain, in broad terms, how its ranking systems work is a reasonable ask in 2026. The DSA does this. Singapore's Code of Practice for Online Safety does a milder version of this.
Where It Goes Wrong
The problem with Vietnam's approach is not that it touches algorithms. It is the surrounding architecture.
First, the substantive standard for takedowns is vague and politically elastic. Vietnam's Penal Code articles 117 and 331 — on "anti-state propaganda" and "abusing democratic freedoms" — have repeatedly been used against bloggers, journalists, and ordinary users, as documented by Human Rights Watch and the U.S. State Department's annual human rights reports. When those same standards are loaded into a 24-hour takedown clock, platforms face a choice between mass over-removal and regulatory retaliation. Almost any rational compliance team will over-remove.
Second, real-name verification at this scale is not a neutral identity feature. It is a chilling instrument. It eliminates the breathing room that pseudonymity provides for dissidents, whistleblowers, LGBTQ users, and ordinary people who simply do not want their offline identity tied to every online utterance. Empirical research on South Korea's real-name system, in force from 2007 until the country's Constitutional Court struck it down in 2012, found that it did little to reduce harmful behavior.
The Algorithmic Disclosure Trap
Third, and most relevant to the algorithmic-accountability framing: disclosure obligations to a regulator that lacks independence, due-process constraints, and judicial review do not produce "accountability." They produce leverage. Once MIC knows the levers of a recommendation system, it can — and reports suggest it has — pressure platforms to tune those levers in politically convenient directions, without ever issuing a formal order that could be challenged.
That is the opposite of what algorithmic accountability is supposed to deliver. The EU model at least channels disclosure to independent regulators, requires public risk assessments, and is overseen by courts. Vietnam's framework keeps the disclosure but strips the safeguards.
A Better Path
Vietnam is a young, online, entrepreneurial country with a thriving creator economy and an emerging digital export sector. It has real interests in tackling fraud, scams, and child safety harms — concerns shared by every government. A proportionate regime would:
- Define illegal content narrowly and align it with international human-rights standards, rather than political-speech offences.
- Use staged takedown timelines tied to the severity and clarity of the harm — hours for CSAM, days for ambiguous speech, with judicial backstops.
- Treat algorithmic transparency as a public obligation (researcher access, audit reports) rather than a private channel to the executive.
- Drop blanket real-name mandates and instead require identity verification only where the activity itself demands it — payments, advertising, large-scale monetized broadcasting.
- Avoid hard data-localization requirements that fragment the internet and raise compliance costs without measurable security gains.
Vietnam can have a modern platform-governance regime that addresses real harms without importing the worst features of authoritarian information control. Decree 147, in its current form, is not that regime. It is a reminder that the language of algorithmic accountability can be borrowed by any system — and that the architecture of accountability, not just the slogan, is what determines whether users actually win.