
India's Deepfake Labeling Mandate Goes Global: How MeitY's IT Rules Amendment Reshapes AI Content Worldwide

Draft amendments to India's IT Rules 2021 require platforms to watermark synthetic media, exporting Delhi's content rules to every major AI and social platform.

[Infographic: India's Deepfake Labeling Rule by the Numbers — 950M Indian internet users (largest online market globally); 24h grievance response window for impersonation flags; 5M+ user threshold for significant social media intermediaries; EU AI Act Article 50 as the parallel rule requiring machine-readable marks]


On the surface, India's latest draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — released this week by the Ministry of Electronics and Information Technology (MeitY) — reads like a measured response to a real problem. Deepfakes of public figures, fabricated political audio, and AI-generated impersonations have proliferated on Indian social platforms over the past two years, culminating in the November 2023 deepfake controversy involving actor Rashmika Mandanna that prompted then-IT Minister Ashwini Vaishnaw to publicly warn intermediaries.

Look closer, and the picture is more complicated. The draft requires social media intermediaries — defined broadly enough to capture Meta, Google, YouTube, X, OpenAI, Anthropic, Microsoft, and any platform with more than five million Indian users — to apply visible watermarks and embed cryptographic metadata tags on "synthetically generated information." Platforms must also act on user-flagged AI content within tight takedown windows or risk losing the safe harbour granted under Section 79 of the IT Act, 2000.

Because India is the world's largest internet market by users — with an estimated 950 million online as of 2024 — no global platform can afford to ignore the rule. That makes this a quietly extraterritorial regulation: a domestic content directive that will shape product engineering decisions in Menlo Park, Mountain View, and San Francisco.

What the Draft Actually Requires

The amendment introduces a new compliance layer on top of the existing Rule 3(1)(b) and Rule 4(2) obligations: visible watermarks on synthetically generated content, embedded machine-readable provenance metadata, and a 24-hour window to act on user flags of AI-driven impersonation.

Non-compliance triggers the loss of intermediary safe harbour — the same enforcement mechanism that has made the 2021 IT Rules one of the most consequential pieces of platform regulation outside the EU.
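What "cryptographic metadata tags" might mean in practice is easiest to see in code. The sketch below is purely illustrative, not the draft's actual technical specification or the C2PA format: it binds a "synthetic" label to a hash of the media bytes and signs the claim, so a verifier can detect both a forged label and a label copied onto different content. All function names and the HMAC-based signing are assumptions for the example.

```python
# Illustrative provenance tag: a signed manifest bound to the media bytes.
# Hypothetical sketch -- not the C2PA spec or any platform's real API.
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-secret-key"  # stand-in for a real key/PKI setup

def make_manifest(media: bytes, generator: str) -> dict:
    """Bind a 'synthetically generated' label to a hash of the media."""
    claim = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "synthetic": True,
        "generator": generator,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check the signature AND that the hash still matches the bytes."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claim["content_sha256"] == hashlib.sha256(media).hexdigest()
    )

image = b"\x89PNG...synthetic image bytes..."
tag = make_manifest(image, generator="example-model-v1")
assert verify_manifest(image, tag)             # intact copy: label verifies
assert not verify_manifest(image + b"x", tag)  # re-encoded bytes: binding breaks
```

Note the double-edged property the sketch exposes: the binding breaks on any byte-level change, which is exactly why such metadata does not survive cropping, re-encoding, or screen recording.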

The Case For Labeling — and Its Limits

There is a legitimate problem here. The Reuters Institute Digital News Report 2024 found that two-thirds of Indian respondents worry about distinguishing real from fake content online, the highest level among large democracies surveyed. Election cycles in 2024 saw documented AI-generated voice clones of political figures circulate at scale, including a deepfake of the late DMK leader M. Karunanidhi resurrected for campaign purposes in Tamil Nadu.

Watermarking and provenance metadata, when done well, are genuinely useful tools. The EU's AI Act, which entered into force in August 2024, includes nearly identical provisions in Article 50 requiring providers of generative AI systems to mark output as artificially generated in a machine-readable format. China's Provisions on the Administration of Deep Synthesis Internet Information Services (effective January 2023) went further with mandatory conspicuous labels. The technical groundwork — C2PA, Google's SynthID, Meta's invisible watermarks — already exists.

But the gap between sensible labeling and overreach is narrow, and India's draft sits uncomfortably close to the edge.

Three Proportionality Problems

First, the scope is too broad. The draft defines "synthetically generated information" expansively enough to capture routine creative tools — automatic background removal, voice de-noising, even AI-assisted translation. Treating an Instagram filter and a non-consensual deepfake as legally equivalent collapses a distinction that matters for both speech and trust.

Second, the technical mandate is fragile. Visible watermarks can be cropped; metadata can be stripped by the same screen-recording techniques that already defeat copyright fingerprints. Researchers at the University of Maryland and ETH Zurich have repeatedly demonstrated that current watermarking schemes — including SynthID-style approaches — can be removed or spoofed with modest effort. Mandating a defence that is reliably breakable creates a false sense of security while imposing real engineering costs.
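The fragility argument can be made concrete with a toy example. The scheme below hides a watermark in the least-significant bit of each pixel value; it is a deliberately naive strawman (not SynthID or any deployed system), but it shows the failure mode the researchers describe: a lossy re-encode, simulated here by coarse quantisation of the kind JPEG-style compression performs, erases the mark without visibly altering the image.

```python
# Toy least-significant-bit (LSB) watermark -- a naive illustration only.
# Real schemes are more robust, but the attack surface is the same shape.

def embed_bits(pixels, bits):
    # Write each watermark bit into a pixel value's least-significant bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_bits(pixels, n):
    return [p & 1 for p in pixels[:n]]

def recompress(pixels, step=8):
    # Crude stand-in for lossy compression: quantise to multiples of `step`.
    return [(p // step) * step for p in pixels]

pixels = [52, 199, 87, 131, 220, 64, 90, 150]
mark   = [1, 0, 1, 1, 0, 1, 0, 0]

stamped = embed_bits(pixels, mark)
print(extract_bits(stamped, 8))              # [1, 0, 1, 1, 0, 1, 0, 0] -- survives a clean copy
print(extract_bits(recompress(stamped), 8))  # [0, 0, 0, 0, 0, 0, 0, 0] -- destroyed by re-encoding
```

The quantised pixels differ from the originals by at most a few intensity levels, imperceptible to a viewer, yet the recovered bits no longer match the mark. That asymmetry (cheap to break, costly to mandate) is the core of the proportionality concern.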

Third, and most importantly, the enforcement mechanism is disproportionate. Tying safe harbour to compliance with rules that platforms cannot perfectly enforce — given the volume of uploads and the cat-and-mouse nature of watermark evasion — recreates the exact chilling effect the Supreme Court warned against in Shreya Singhal v. Union of India (2015). Platforms facing strict liability will over-remove, especially around politically sensitive content where the cost of erring on the side of speech is highest.

What a Better Rule Would Look Like

A proportionate version of this regulation would: (1) focus on non-consensual synthetic content depicting identifiable individuals, where the harm is clear and the legal basis solid; (2) distinguish between mandatory disclosure for high-risk categories (political content, sexual content, impersonation) and voluntary best-practice labeling for routine creative AI; (3) preserve safe harbour where platforms act in good faith, rather than imposing strict liability; and (4) align technical standards with the C2PA and EU AI Act regimes so global platforms can build once and deploy everywhere.

India's open consultation period runs through early July 2026. The platforms that submit comments will not just be defending their own engineering roadmaps — they will be shaping a template that other governments, from Brasília to Pretoria to Jakarta, are watching closely. Get this right, and India offers a model for proportionate AI accountability. Get it wrong, and the world's largest democracy becomes the test case for content regulation that scales worse than the harms it aims to address.

Sources & Citations

  1. MeitY — IT Rules 2021 (consolidated)
  2. EU AI Act — Article 50 (transparency for AI-generated content)
  3. Reuters Institute Digital News Report 2024 — India chapter
  4. Shreya Singhal v. Union of India, Supreme Court of India (2015)
  5. Coalition for Content Provenance and Authenticity (C2PA) specification