
India's AI Labeling Mandate Tests the Limits of Intermediary Liability — and US Platforms Are Pushing Back

MeitY's draft IT Rules amendments require visible markers and metadata on all AI-generated content, raising feasibility and safe-harbor questions for US platforms.

[Infographic: India's AI Labeling Mandate: Scope and Stakes. Covers the ~900M Indian internet users subject to the rules; Section 79 IT Act safe-harbor conditions; EU AI Act Article 50, which places the labeling duty primarily on providers; and the 2025 TAKE IT DOWN Act targeting non-consensual deepfakes. Source: peopleofinternet.com]

Key Takeaways

India's Ministry of Electronics and Information Technology (MeitY) is moving to make Indian law one of the most aggressive in the world on synthetic media. The draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, published for consultation in late October 2025, would require social media intermediaries — and the subset designated as 'Significant Social Media Intermediaries' (SSMIs) — to apply visible labels and embedded metadata to all AI-generated and synthetically created content surfaced on their platforms. For US-headquartered companies that serve India's ~900 million internet users, this is not a minor compliance update. It is a structural change to how intermediary safe harbor works under Section 79 of the IT Act, 2000.

What the draft actually requires

The proposed amendments expand the existing due-diligence obligations in Rule 3 of the IT Rules 2021 in three meaningful directions. First, intermediaries must ensure that synthetic content carries a clearly visible label (a watermark, banner, or overlay) identifying it as AI-generated. Second, the same content must carry machine-readable provenance metadata — implicitly nodding to standards like C2PA, which Adobe, Microsoft, OpenAI and others already support. Third, the definition of regulated content is broadened to include not just deepfakes of real people but also fully synthetic media and large language model (LLM) outputs that constitute 'information' under the IT Act.
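
To make the twin obligations concrete, here is a minimal Python sketch, assuming a Pillow-based image pipeline: it stamps a visible banner and builds a small provenance record. The manifest fields are illustrative placeholders, not the actual C2PA schema, which uses signed, hash-linked assertions.

```python
# Minimal sketch of the two labeling duties in the draft: a visible
# overlay plus machine-readable provenance metadata. The manifest layout
# is illustrative only -- it is NOT the real C2PA schema.
import hashlib
import json

from PIL import Image, ImageDraw  # pip install Pillow


def label_synthetic_image(img: Image.Image, generator: str) -> tuple[Image.Image, dict]:
    """Stamp a visible 'AI-generated' banner and build a provenance record."""
    labeled = img.copy()
    draw = ImageDraw.Draw(labeled)

    # Visible label: a simple banner along the bottom edge.
    draw.rectangle([(0, labeled.height - 24), (labeled.width, labeled.height)], fill="black")
    draw.text((8, labeled.height - 20), "AI-generated content", fill="white")

    # Machine-readable provenance: hash of the labeled pixels plus origin info.
    digest = hashlib.sha256(labeled.tobytes()).hexdigest()
    manifest = {
        "claim": "synthetic-media",  # hypothetical field names
        "generator": generator,
        "content_sha256": digest,
    }
    return labeled, manifest


if __name__ == "__main__":
    synthetic = Image.new("RGB", (512, 512), "slategray")  # stand-in for model output
    labeled, manifest = label_synthetic_image(synthetic, generator="example-model-v1")
    labeled.save("labeled.png")
    print(json.dumps(manifest, indent=2))
```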

MeitY frames the rules as a response to deepfake harms — non-consensual intimate imagery, election manipulation, and impersonation scams — that have intensified in India over the past two years. The political concern is genuine. But the mechanism chosen places the compliance burden squarely on intermediaries rather than on the upstream creators of synthetic media or the malicious actors who weaponize it.

Why US platforms are pushing back

Meta, X, Google/YouTube and OpenAI have, according to reports, submitted detailed comments raising both technical and legal objections. Three concerns stand out. First, feasibility: intermediaries have no reliable way to detect third-party AI-generated content, and both visible watermarks and embedded metadata can be trivially stripped before upload, as the sketch below illustrates. Second, safe harbor: a duty to affirmatively identify and label synthetic content pushes intermediaries toward the kind of proactive, general-purpose monitoring that Section 79 and the Supreme Court's reading of intermediary liability in Shreya Singhal v. Union of India (2015) were meant to foreclose. Third, overbreadth: by sweeping in all LLM outputs that constitute 'information' under the IT Act, the draft would require labeling of vast volumes of benign content, from AI-assisted translations to customer-service replies.
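
The feasibility objection is easy to demonstrate. The hedged sketch below, continuing the earlier Pillow example, shows how one crop and one re-encode defeat both mandated markers; the file names are placeholders.

```python
# A sketch of the platforms' feasibility objection: both mandated markers
# survive only until the first trivial transformation.
from PIL import Image

original = Image.open("labeled.png")  # output of the previous sketch

# Cropping off the bottom banner removes the visible label entirely.
stripped = original.crop((0, 0, original.width, original.height - 24))

# Re-encoding to JPEG discards embedded metadata (EXIF/XMP) unless the
# uploader deliberately copies it across -- malicious actors won't.
stripped.save("stripped.jpg", quality=90)

# The intermediary now receives a file with no label, no metadata, and no
# reliable classifier to tell it the content was synthetic to begin with.
```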

The US policy comparison

Washington's approach has been deliberately lighter. The Biden administration's Executive Order 14110 on AI (October 2023) called for content authentication research through NIST but stopped short of mandates; the Trump administration rescinded that order in January 2025 and has signaled a preference for industry-led standards. The TAKE IT DOWN Act, signed in May 2025, criminalizes non-consensual intimate deepfakes and creates a notice-and-takedown regime — a narrower, harm-targeted intervention rather than a blanket labeling rule. State-level laws like California's AB 2655 (election deepfakes) take a similar harm-specific approach.

The contrast with India matters because US platforms must now build labeling systems for the Indian market that will likely exceed what they deploy domestically — creating a fragmented user experience and inviting reciprocal mandates from other jurisdictions. The EU AI Act's Article 50, which takes effect in August 2026, already requires providers of generative AI systems to mark their outputs as AI-generated in a machine-readable format, but it places that duty primarily on the AI provider, not on every downstream intermediary.

A proportionate path forward

The policy goal — reducing deepfake harm without breaking the open internet — is the right one. The current draft, however, conflates three distinct problems: malicious impersonation (a harm-specific issue), provenance transparency (a standards issue), and platform liability (a constitutional issue). Better-targeted alternatives exist:

  1. Place the marking duty on AI providers at generation time, as the EU AI Act's Article 50 does, rather than on every downstream intermediary.
  2. Target the actual harms with specific offences and notice-and-takedown duties for non-consensual intimate imagery and election deepfakes, along the lines of the TAKE IT DOWN Act.
  3. Endorse open provenance standards such as C2PA, so platforms verify credentials that travel with content instead of attempting unreliable detection (a sketch of that division of labor follows below).
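
As a rough illustration of the first alternative, the sketch below splits the work the way Article 50 does: the provider signs a manifest at generation time, and the intermediary only verifies it at upload. The HMAC shared secret is a stand-in for the certificate-based signatures a real provenance scheme such as C2PA would use; all names are hypothetical.

```python
# Provider-duty alternative: the generation service signs a provenance
# manifest; the intermediary verifies the signature instead of trying to
# classify content itself. HMAC stands in for real PKI-based signing.
import hashlib
import hmac
import json

PROVIDER_KEY = b"demo-shared-secret"  # illustrative; real schemes use PKI


def provider_sign(content: bytes, generator: str) -> dict:
    """Run by the AI provider at generation time."""
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def intermediary_verify(content: bytes, manifest: dict) -> bool:
    """Run by the platform at upload: check the manifest, don't classify."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest.get("signature", ""))
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    content = b"stand-in for generated media bytes"
    manifest = provider_sign(content, generator="example-model-v1")
    print("verified:", intermediary_verify(content, manifest))         # True
    print("tampered:", intermediary_verify(content + b"x", manifest))  # False
```

The design point is that verification is cheap and deterministic; the hard problem of classifying unsigned third-party content never lands on the intermediary at all.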

India will, and should, set its own digital sovereignty agenda. But a labeling mandate with which platforms cannot technically comply does not reduce deepfake harm; it manufactures liability and chills the very platforms that compete with the closed Chinese and Russian models of internet governance. The consultation window is the right moment to recalibrate.

Sources & Citations

  1. MeitY — IT Rules 2021 (official text)
  2. Shreya Singhal v. Union of India (Supreme Court of India, 2015)
  3. EU AI Act — Article 50 transparency obligations (EUR-Lex)
  4. C2PA — Coalition for Content Provenance and Authenticity
  5. Reuters coverage — India deepfake rules debate