
India's AI Labeling Rules: Mandating Transparency Without Strangling Innovation

MeitY's draft IT Rules amendment requires platforms to label synthetic content — a sensible transparency norm, if Delhi can resist the urge to overreach.

[Infographic: India's Layered AI Governance Stack (People of Internet Research, peopleofinternet.com). Key figures: 5M Indian users is the threshold for Significant Social Media Intermediary designation; ₹10,372 crore IndiaAI Mission budget approved by the Union Cabinet in March 2024; DPDP Act enacted 2023, implementing rules being finalised; ~900M estimated Indian internet users affected.]

Key Takeaways

India is quietly building one of the world's most consequential AI governance stacks — not through a single grand statute like the EU AI Act, but through a layered patchwork of executive rules, sectoral advisories, and a state-backed compute push. The latest piece arrived in late 2025, when the Ministry of Electronics and Information Technology (MeitY) released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 requiring intermediaries — especially Significant Social Media Intermediaries (SSMIs) — to label AI-generated and synthetically generated content, including deepfakes. Consultations are continuing into 2026.

This is, on balance, a measured and welcome direction. But the details — and the temptation to bolt on adjacent demands — will decide whether India ends up with a transparency norm the world copies, or a compliance maze that punishes the open internet without solving the underlying harm.

What the draft actually does

The proposed amendments build on a series of MeitY advisories issued through 2023 and 2024 reminding platforms of their existing obligations around deepfakes and misinformation. The new draft moves beyond advisory mode, converting the labeling of AI-generated and synthetically generated content from a ministerial suggestion into a binding due-diligence obligation under the 2021 Rules.

The draft sits alongside two other moving pieces: the Digital Personal Data Protection Rules implementing the 2023 DPDP Act, and the IndiaAI Mission — the Union Cabinet's roughly ₹10,372 crore programme approved in March 2024 to build sovereign compute capacity, fund applied research, and stand up a dedicated AI safety institute.

Why labeling, done right, is pro-innovation

Critics sometimes treat any AI regulation as a brake on the technology. That framing is lazy. Transparency rules — when narrowly drawn — are different in kind from substantive restrictions on what models may compute or what speech they may produce. A label says "this image was made with AI." It does not say "you cannot make it."

That distinction matters because India's deepfake problem is real. High-profile non-consensual synthetic imagery cases in 2023 and 2024 involving Indian public figures prompted cross-party outrage and the original MeitY advisories. Election-period synthetic media circulated widely during the 2024 general election cycle. A clear, machine-readable provenance signal — ideally aligned with open standards such as the C2PA Content Credentials framework — gives platforms, fact-checkers, and citizens a fighting chance without forcing the government to police the underlying generation.
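A provenance signal of this kind is, at bottom, structured metadata bound to a hash of the content. The sketch below is a deliberately simplified, illustrative approximation in Python; real C2PA Content Credentials manifests are binary, cryptographically signed structures with a far richer schema, and every field name here is a hypothetical stand-in except the IPTC digitalSourceType term, which C2PA borrows to flag AI-generated media.

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Build a simplified, C2PA-inspired provenance record.

    Illustrative sketch only: real C2PA manifests are binary
    (JUMBF/CBOR), signed with X.509 credentials, and much richer.
    """
    return {
        "claim_generator": generator,  # the tool that produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": [
            {
                # IPTC's vocabulary for media produced by a generative model
                "digitalSourceType": "trainedAlgorithmicMedia",
            }
        ],
    }

manifest = make_provenance_manifest(b"<image bytes>", "example-genai-tool/1.0")
print(json.dumps(manifest, indent=2))
```

Because the record carries a hash of the exact bytes it describes, any later edit to the content invalidates the binding, which is what makes such a signal machine-checkable rather than a free-floating caption.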

It also keeps liability roughly where it belongs. India's intermediary regime, anchored by Section 79 of the IT Act and the 2021 Rules, has long depended on a safe-harbour bargain: platforms get protection in exchange for due diligence. Labeling is a due-diligence obligation, not a content-takedown mandate.

Where the draft could go wrong

There are three risks worth flagging during the consultation.

First, scope creep. Earlier MeitY advisories in March 2024 briefly attempted to require government permission before deploying "under-tested" or "unreliable" AI models in India. That requirement was walked back within days after a backlash from researchers and startups, and rightly so — prior approval regimes for general-purpose technology are the opposite of proportionate. The labeling draft should not quietly resurrect licensing through the back door.

Second, technical feasibility. Detection of AI-generated content at scale remains an unsolved problem. Watermarking is fragile; provenance metadata can be stripped. The draft must distinguish between obligations on creators and originating tools (where attestation is feasible) and obligations on downstream intermediaries (where perfect detection is not). A strict-liability rule that penalises platforms for missing every adversarially stripped deepfake will simply over-incentivise removal of legitimate speech.

Third, parallel rule overload. The DPDP Rules, the labeling amendments, and forthcoming IndiaAI safety guidelines are all being shaped in overlapping windows. Indian startups and global platforms need an integrated compliance roadmap, not three uncoordinated rulebooks with conflicting definitions of "AI system" and "significant" intermediary.

A model for the Global South — if Delhi holds its nerve

India has a real opportunity here. Brussels has set the regulatory tone with the AI Act, but its prescriptive risk-tier model is heavy for emerging economies. Washington has substantive guidance but no comprehensive federal law. A labeling-first, transparency-centric Indian framework — paired with the IndiaAI Mission's positive agenda of compute, datasets, and safety research — could become the template much of the Global South adopts.

The win condition is narrow and achievable: a clear labeling obligation, technology-neutral and aligned with open provenance standards; a liability regime that rewards good-faith effort rather than demanding omniscience; and a clean separation between transparency rules and any future content-restriction regime, which deserves its own primary legislation and parliamentary scrutiny.

India has historically been one of the world's strongest champions of the open internet. The 2026 consultation is the moment to prove that pro-innovation governance and platform accountability are not in tension. Done well, labeling is exactly the kind of small, sharp instrument a free internet can live with.

Sources & Citations

  1. MeitY — Ministry of Electronics and Information Technology
  2. Information Technology (Intermediary Guidelines) Rules, 2021
  3. IndiaAI Mission — official portal
  4. Digital Personal Data Protection Act, 2023
  5. Reuters — India's IT Rules and deepfake advisories coverage