Thailand's Two-Track AI Rulebook: Why Bangkok Should Resist Copy-Pasting Brussels

Thailand's draft Royal Decree on AI Services and parallel risk-based AI Act offer a chance to chart a proportionate path — if regulators avoid the EU's compliance overhead.

At a glance: two regulatory tracks (the draft Royal Decree on AI Services plus a risk-based AI Act); three or more risk tiers in the draft Act (prohibited, high-risk, and limited-risk); 113 articles in the EU AI Act, a measure of its compliance overhead; and a growing set of regional AI frameworks (Singapore, Japan, and now Thailand).

Key Takeaways

Thailand is quietly building one of Southeast Asia's most ambitious artificial intelligence regulatory architectures. Through 2024-2026, the Ministry of Digital Economy and Society (MDES) and the Electronic Transactions Development Agency (ETDA) have been advancing a two-track framework: a draft Royal Decree on AI System Services to govern providers in the near term, and a parallel risk-based AI Act that borrows heavily from the European Union's AI Act model. Public consultations have refined a tiered classification scheme that would sort AI systems into prohibited, high-risk, and limited-risk buckets — a familiar structure to anyone who has read Brussels' 2024 regulation.

The two-track approach is sensible. A lighter-touch service-business decree fills the interim regulatory gap without freezing innovation, while the longer Act gives Parliament time to debate the harder questions about general-purpose AI, foundation models, and liability. But as Thailand finalises this framework, policymakers should be candid about what the EU AI Act has — and has not — achieved, and design accordingly.

The EU model is a cautionary tale, not a template

Brussels' AI Act has already been criticised by European industry, startups, and even some of its original supporters for imposing compliance costs that fall hardest on smaller developers and non-EU firms hoping to serve European users. Conformity assessments, documentation requirements, fundamental-rights impact assessments, and the GPAI tier obligations layer on top of the GDPR, the Digital Services Act, the Digital Markets Act, and sectoral rules. The result is a compliance stack that large U.S. and Chinese labs can absorb, but that local SMEs and Thai startups would struggle to navigate.

Thailand should learn from this. The Royal Decree's lighter-touch approach — focused on registration, transparency, and risk disclosures for AI service providers — is closer to the proportionate model the region needs. ETDA's earlier work on the AI Ethics Guidelines (2022) and the AI Governance Guideline for Executives already established a soft-law foundation; the Decree extends that into binding but limited obligations without prematurely locking in EU-style ex ante conformity assessments.

Define risk narrowly, or define everything as risky

The central design choice in any risk-based AI framework is where to draw the lines. The EU Act's "high-risk" category quietly captures vast swathes of ordinary software — from CV-screening tools to credit scoring to education assessment — pulling millions of routine business applications into a heavyweight compliance regime. If Thailand replicates that scope, it will impose Brussels' costs on a market roughly 1/20th the size of the EU economy.

A better approach: limit "high-risk" to genuinely safety-critical deployments — medical diagnostics, autonomous vehicles, critical infrastructure control — and rely on existing sectoral regulators (the Bank of Thailand, the Office of Insurance Commission, the Food and Drug Administration) for domain-specific oversight. General employment, education, and consumer-facing applications should default to transparency and redress obligations, not pre-market certification.
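The narrower scope argued for above can be made concrete with a small sketch. This is purely illustrative: the tier names follow the draft Act's three buckets, but the domain categories, the set of prohibited uses, and the `classify` function are assumptions for illustration, not provisions of any Thai draft text.

```python
# Illustrative sketch of a narrowly scoped risk classifier.
# Domain labels and the prohibited-use example are hypothetical.
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"

# Only genuinely safety-critical deployments trigger the heavyweight tier.
SAFETY_CRITICAL = {
    "medical_diagnostics",
    "autonomous_vehicles",
    "critical_infrastructure_control",
}

# Hypothetical example of a use a legislature might ban outright.
PROHIBITED_USES = {"social_scoring"}

def classify(domain: str) -> Tier:
    """Sort a deployment domain into a risk tier under the narrow definition."""
    if domain in PROHIBITED_USES:
        return Tier.PROHIBITED
    if domain in SAFETY_CRITICAL:
        return Tier.HIGH_RISK
    # Employment, education, and consumer-facing tools default here:
    # transparency and redress obligations, not pre-market certification.
    return Tier.LIMITED_RISK

print(classify("medical_diagnostics").value)  # high-risk
print(classify("cv_screening").value)         # limited-risk
```

The design point is that the default branch does the policy work: anything not expressly enumerated as safety-critical or prohibited falls into the light-obligation tier, rather than the EU approach of enumerating broad high-risk annexes that sweep in routine software.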

Foundation models: don't regulate what you can't define

Thailand's draft framework, like the EU's, gestures toward special obligations for general-purpose or "foundation" AI systems. This is the most fragile part of any AI law. The technology is moving faster than the definitions. The EU's own GPAI threshold — based on cumulative compute used in training — was already controversial when adopted and is now of doubtful relevance as algorithmic efficiency improves and inference-time reasoning blurs the training/deployment line.

If Thailand insists on GPAI rules, they should focus on disclosure (training data summaries, evaluation results, known limitations) rather than mandated red-teaming or compute caps, which Bangkok cannot meaningfully verify or enforce against foreign labs. Indeed, OpenAI's recent endorsement of Illinois SB 315 — a state-level frontier AI safety bill emphasising transparency and incident reporting over prescriptive design rules — points toward a workable model: outcomes and accountability, not technology mandates.
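A disclosure-first GPAI obligation, as suggested above, amounts to requiring providers to file a structured record rather than pass a pre-market assessment. The sketch below shows what such a record might contain; the class, its field names, and the completeness check are assumptions for illustration, not drawn from any draft text.

```python
# Hypothetical sketch of a disclosure-based GPAI filing.
# All field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict

@dataclass
class GPAIDisclosure:
    provider: str
    model_name: str
    training_data_summary: str                    # high-level description, not raw data
    evaluation_results: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A regulator can check that disclosures are present without
        # inspecting, verifying, or capping training compute.
        return bool(self.training_data_summary and self.known_limitations)

record = GPAIDisclosure(
    provider="ExampleLab",                        # hypothetical provider
    model_name="example-model-v1",
    training_data_summary="Public web text and licensed Thai-language corpora",
    known_limitations=["Degraded accuracy on low-resource Thai dialects"],
)

print(record.is_complete())   # True
print(asdict(record)["provider"])
```

The enforcement asymmetry is the point: presence and completeness of a filing is something a Thai regulator can verify against any provider, foreign or domestic, whereas compute thresholds and mandated red-teaming are not.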

Thailand's real AI opportunity

Thailand has genuine assets in the AI race: a sizeable digital economy, strong cloud and data-centre investment from hyperscalers, a productive partnership culture with Japanese and Korean tech firms, and a Thai-language NLP research community concentrated at NECTEC, Chulalongkorn, and KMUTT. The Eastern Economic Corridor (EEC) is positioning itself as a regional AI manufacturing and R&D hub. None of this will scale if regulation pushes compliance costs above what local firms can bear.

The Royal Decree's interim approach — registration, basic transparency, and category-based risk disclosures — should remain the backbone even after the AI Act passes. The Act itself should:

- confine "high-risk" to genuinely safety-critical deployments, leaving domain-specific oversight to existing sectoral regulators;
- default employment, education, and consumer-facing applications to transparency and redress obligations rather than pre-market certification;
- keep any general-purpose AI obligations disclosure-based — training data summaries, evaluation results, known limitations — rather than imposing design mandates Bangkok cannot verify or enforce against foreign labs.

The window matters

Thailand is finalising its AI rulebook at a moment when the rest of the region is watching. Indonesia, Vietnam, and the Philippines have all signalled they may follow the leading regional model. If Bangkok chooses proportionate, outcome-based regulation, it can set a Southeast Asian template that supports innovation while addressing real harms. If it copy-pastes Brussels, it will export the EU's compliance overhead to a region that cannot absorb it. The two-track approach is a good start — provided the second track does not undo the wisdom of the first.

Sources & Citations

  1. ETDA — Electronic Transactions Development Agency, Thailand
  2. Ministry of Digital Economy and Society, Thailand
  3. EU AI Act — Official Regulation 2024/1689
  4. OpenAI backs Illinois frontier AI safety bill (MediaNama)
  5. Singapore's Model AI Governance Framework — IMDA