
India's Open-Source AI Dilemma: Can MeitY Back Indigenous Models Without Strangling Them?

MeitY's coming open-source AI framework must reconcile sovereign-LLM ambitions with content-labelling rules and DPDP Act obligations that don't fit open weights.

India's Open-Source AI Push by the Numbers (People of Internet Research)

  - ₹10,372 crore: IndiaAI Mission outlay, Cabinet-approved in 2024
  - 4: indigenous models selected (Sarvam AI, Soket AI Labs' EKA, Gnani.ai, Gan.AI)
  - 2B: Sarvam-1 parameters, an open-weight Indic-language model
  - Oct 2025: MeitY's draft IT Rules amendments introducing synthetic-content labelling

Key Takeaways

India is trying to do two things at once, and the friction between them will define its AI decade. On one side, the IndiaAI Mission has just placed serious public money behind indigenous foundation models — Sarvam AI, Soket AI Labs' EKA, Gnani.ai and Gan.AI were selected for sovereign LLM development with subsidised GPU compute under a ₹10,372-crore programme cleared by the Cabinet in 2024. On the other, the Ministry of Electronics and Information Technology (MeitY) is finalising a formal open-source AI policy framework even as its October 2025 draft amendments to the IT Rules push synthetic-content labelling obligations onto 'significant social media intermediaries' and the Digital Personal Data Protection (DPDP) Act, 2023 begins to bite on training data.

The unresolved question is simple to state and hard to answer: when a foundation model is released openly — weights, tokeniser, sometimes training recipe — who exactly is the 'intermediary' or 'data fiduciary' once the model has been forked, quantised, fine-tuned and embedded in a thousand downstream apps?

Open weights are not a product. They are an ecosystem.

That distinction is the one most regulatory drafts struggle with. A closed API model has a clear operator: the lab serving the inference. An open-weight model like Meta's Llama, Mistral, Sarvam-1 or Krutrim has no single chokepoint. It has a publisher, hundreds of redistributors on Hugging Face mirrors and Indian developer infrastructure, thousands of fine-tuners, and an open-ended set of integrators shipping it inside chatbots, voice agents and document tools. Applying the IT Rules' watermarking and 'unique metadata identifier' requirements uniformly across that stack would either (a) be unenforceable, or (b) force every Indian developer pulling a checkpoint from a public registry to behave as a regulated intermediary.
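To make the enforcement problem concrete, here is a minimal sketch of what a 'unique metadata identifier' obligation might look like in code. Everything in it is hypothetical: the draft IT Rules do not specify a scheme, and the field names and hashing approach are invented for illustration. The point it demonstrates is structural: the identifier is bound to whoever serves the inference, so every fork or fine-tune produces a different tag and there is no single chokepoint to regulate.

```python
import hashlib
import json


def tag_synthetic_output(text: str, model_id: str, publisher: str) -> dict:
    """Attach a hypothetical 'unique metadata identifier' to model output.

    Illustrative only: the October 2025 draft does not define this scheme,
    and these field names are invented for the sketch.
    """
    payload = {
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "model_id": model_id,    # the checkpoint that produced the text
        "publisher": publisher,  # the party serving inference -- the contested role
        "synthetic": True,
    }
    # Bind the fields together into one identifier. A downstream fine-tuner
    # re-serving the same weights fills in a different publisher field and
    # therefore emits a different tag -- the 'no single chokepoint' problem.
    payload["identifier"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    return payload


lab_tag = tag_synthetic_output("Namaste!", "open-weight-2b", publisher="original-lab")
fork_tag = tag_synthetic_output("Namaste!", "open-weight-2b", publisher="fine-tuner-x")
print(lab_tag["identifier"] != fork_tag["identifier"])  # True: same text, different actor
```

Identical output text, identical weights, two different regulated parties: any rule that attaches to the identifier has to decide which of them it binds.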

The DPDP Act compounds this. Section 8 makes data fiduciaries responsible for the accuracy and lawful processing of personal data. If a model trained on a web crawl memorises identifiable information, is the original publisher the fiduciary, or the start-up that fine-tunes it on Indian customer-service transcripts? Indian regulators have not yet answered, and the silence is already shaping behaviour: more than one Indian team has told us privately that they are hedging by training only on filtered, licensed corpora — at significant capability cost.
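The hedging those teams describe amounts to scrubbing or discarding anything that looks like personal data before training. A crude sketch of that kind of filter is below; the patterns are illustrative, not a compliance tool (real pipelines lean on NER models and licensed-source allowlists), but even this toy version shows where the capability cost comes from: aggressive filtering mutilates or drops otherwise useful text.

```python
import re

# Illustrative PII patterns only -- not an exhaustive or compliant scrubber.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b(?:\+91[\s-]?)?[6-9]\d{9}\b")  # Indian mobile numbers


def scrub(record: str) -> str:
    """Replace matched personal identifiers with placeholder tokens."""
    record = EMAIL.sub("[EMAIL]", record)
    record = PHONE.sub("[PHONE]", record)
    return record


corpus = [
    "Call me at 9876543210 about the refund.",
    "The monsoon arrived early in Kerala this year.",
]
cleaned = [scrub(r) for r in corpus]
print(cleaned[0])  # "Call me at [PHONE] about the refund."
```

The first record survives only in mangled form; multiply that across a web-scale crawl and the trade-off between DPDP caution and model capability becomes visible.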

Why the open-source bet is the right one

Despite these frictions, MeitY's instinct to back open models is sound. India's competitive position in AI does not lie in matching frontier training spend with OpenAI or Anthropic; it lies in adaptation, multilinguality, and deployment at price points the rest of the Global South can use. That is a downstream game, and downstream games are won on open weights. Sarvam AI's release of Sarvam-1, a 2-billion-parameter Indic model, and Krutrim's multilingual roadmap are credible because their weights are inspectable; closed analogues would never have attracted the developer mind-share they now enjoy.

Rest of World's reporting this week on India's domestic VC ecosystem — local funds now leading deals that Silicon Valley once dominated — points to the same dynamic. Indian capital is patient enough to back deep-tech, and Indian developers will build on whatever stack is cheapest and most modifiable. Regulation that treats open weights as inherently more dangerous than closed APIs would simply hand the long tail of Indian AI deployment back to foreign closed-model providers.

A proportionate framework — what MeitY should and shouldn't do

The forthcoming policy will be judged on its design choices: above all, where in the open-weight stack obligations attach, and whether publishers are held liable for downstream uses they cannot control.

India also needs to resist the temptation — visible in the Motorola defamation suit currently before the Delhi High Court, which has asked platforms to pre-empt 'similar' future posts — to import a takedown-first reflex into AI governance. Open-source AI cannot survive an injunctive regime that asks publishers to prevent uses they cannot foresee.

The window is now

India is in the rare position of being a credible third pole in AI, not because it has the most capital but because it has the developers, the languages, and a public policy willingness to subsidise compute for indigenous labs. Squandering that by retrofitting intermediary-style rules onto model weights would be a self-inflicted wound. The open-source AI framework MeitY ships in 2026 will tell us whether India means to compete — or merely to regulate the competition it could have had.

Sources & Citations

  1. IndiaAI Mission — official portal
  2. MeitY — Ministry of Electronics and IT
  3. Rest of World: India's VCs are beating Silicon Valley at home (May 2026)
  4. Rest of World: Motorola's India lawsuit and platform liability (May 2026)
  5. Digital Personal Data Protection Act, 2023 — MeitY