India's Ministry of Electronics and Information Technology (MeitY) is moving to give India one of the world's most aggressive legal regimes for synthetic media. The draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, published for consultation in late October 2025, would require social media intermediaries, including the subset designated as 'Significant Social Media Intermediaries' (SSMIs), to apply visible labels and embedded metadata to all AI-generated and synthetically created content surfaced on their platforms. For US-headquartered companies that serve India's roughly 900 million internet users, this is not a minor compliance update. It is a structural change to how intermediary safe harbor works under Section 79 of the IT Act, 2000.
What the draft actually requires
The proposed amendments expand the existing due-diligence obligations in Rule 3 of the IT Rules 2021 in three meaningful directions. First, intermediaries must ensure that synthetic content carries a clearly visible label (a watermark, banner, or overlay) identifying it as AI-generated. Second, the same content must carry machine-readable provenance metadata — implicitly nodding to standards like C2PA, which Adobe, Microsoft, OpenAI and others already support. Third, the definition of regulated content is broadened to include not just deepfakes of real people but also fully synthetic media and large language model (LLM) outputs that constitute 'information' under the IT Act.
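To make the metadata requirement concrete, the sketch below shows the signed-provenance pattern that C2PA formalizes: a manifest bound to the asset's content hash and signed by the creating tool. It is a conceptual illustration only; the real standard embeds a COSE-signed manifest in the asset file itself (a JUMBF box in a JPEG, for instance), and the JSON layout, assertion label, and helper names here are assumptions made for readability.

```python
# Conceptual sketch of C2PA-style signed provenance. The manifest
# structure and field names below are illustrative, not the real
# C2PA wire format (which is COSE-signed and embedded in the asset).
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_manifest(asset_bytes: bytes, generator: str) -> bytes:
    """Bind a provenance claim to the asset via its content hash."""
    claim = {
        "claim_generator": generator,            # e.g. the AI tool that produced the asset
        "assertions": [{"label": "ai_generated"}],
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    return json.dumps(claim, sort_keys=True).encode()

# The creating tool signs the manifest at generation time...
signer = ed25519.Ed25519PrivateKey.generate()
asset = b"<image bytes from the generating tool>"
manifest = make_manifest(asset, "ExampleGen/1.0")
signature = signer.sign(manifest)

# ...and anyone downstream can verify it without trusting the uploader.
# Raises cryptography.exceptions.InvalidSignature if tampered with.
signer.public_key().verify(signature, manifest)
```

The verification step is also where the scheme's limits show: an open-source model that never signs its outputs produces no manifest to verify, which is precisely the gap the platforms' first objection below points at.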
MeitY frames the rules as a response to deepfake harms — non-consensual intimate imagery, election manipulation, and impersonation scams — that have intensified in India over the past two years. The political concern is genuine. But the mechanism chosen places the compliance burden squarely on intermediaries rather than on the upstream creators of synthetic media or the malicious actors who weaponize it.
Why US platforms are pushing back
Meta, X, Google/YouTube and OpenAI have, according to reports, submitted detailed comments raising both technical and legal objections. Three concerns stand out.
- Detection is not a solved problem. No watermarking technique today is robust against simple transformations: screenshotting, recompression, cropping, or running content through a second generative model. The C2PA standard relies on cryptographic signing at creation, which works for content from cooperating tools but leaves out outputs from open-source models, jailbroken systems, or adversarial pipelines. Mandating that platforms 'ensure' labels exist on content they did not create is, in practice, a mandate to deploy detection systems with double-digit false-positive and false-negative rates against billions of uploads daily (the arithmetic sketch after this list shows what those rates mean in absolute terms).
- Safe harbor is the load-bearing wall. Section 79 has anchored limited intermediary liability in India since the 2008 amendments to the IT Act, and the Supreme Court reaffirmed it in Shreya Singhal v. Union of India (2015), which struck down Section 66A and read down Section 79's takedown conditions. Conditioning safe harbor on the accurate labeling of AI content (content that intermediaries by definition did not author) pushes India closer to a strict-liability model that the Supreme Court has historically resisted.
- Compliance scope is ambiguous. Does an AI-edited photograph require a label? An autocorrect suggestion? A LinkedIn post drafted with a generative tool? The current draft does not draw clean lines, and the cost of over-labeling (applying labels to everything to be safe) is a user experience in which labels become meaningless noise.
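The scale problem in the first objection is easy to quantify with base-rate arithmetic. All figures below are illustrative assumptions, deliberately more generous than the double-digit error rates the platforms cite:

```python
# Back-of-the-envelope base-rate arithmetic. Every number below is an
# illustrative assumption, not a measured platform figure.
daily_uploads = 2_000_000_000   # assumed daily upload volume across a large platform
synthetic_rate = 0.05           # assume 5% of uploads are AI-generated
recall = 0.90                   # detector catches 90% of synthetic items
false_positive_rate = 0.02     # and mislabels 2% of authentic items

true_positives = daily_uploads * synthetic_rate * recall
false_positives = daily_uploads * (1 - synthetic_rate) * false_positive_rate
missed = daily_uploads * synthetic_rate * (1 - recall)
precision = true_positives / (true_positives + false_positives)

print(f"Authentic items wrongly labeled per day: {false_positives:,.0f}")  # 38,000,000
print(f"Synthetic items missed per day:          {missed:,.0f}")           # 10,000,000
print(f"Share of applied labels that are wrong:  {1 - precision:.0%}")     # ~30%
```

Even under these favorable assumptions, tens of millions of items are misclassified every day and roughly three in ten applied labels are wrong, which is exactly how labels become noise.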
The US policy comparison
Washington's approach has been deliberately lighter. The Biden administration's Executive Order 14110 on AI (October 2023) called for content authentication research through NIST but stopped short of mandates; the Trump administration rescinded that order in January 2025 and has signaled preference for industry-led standards. The TAKE IT DOWN Act, signed in May 2025, criminalizes non-consensual intimate deepfakes and creates a notice-and-takedown regime — a narrower, harm-targeted intervention rather than a blanket labeling rule. State-level laws like California's AB 2655 (election deepfakes) take a similar harm-specific approach.
The contrast with India matters because US platforms must now build labeling systems for the Indian market that will likely exceed what they deploy domestically, creating a fragmented user experience and inviting reciprocal mandates from other jurisdictions. The EU AI Act's Article 50, which takes effect in August 2026, already requires providers of generative AI systems to mark their outputs as AI-generated in a machine-readable format, but it places that duty primarily on the AI provider, not on every downstream intermediary.
A proportionate path forward
The policy goal of reducing deepfake harm without breaking the open internet is the right one. The current draft, however, conflates three distinct problems: malicious impersonation (a harm-specific issue), provenance transparency (a standards issue), and platform liability (a constitutional issue). Better-targeted alternatives exist:
- Adopt the EU model of locating the labeling duty primarily with AI providers under a C2PA-style technical standard, with intermediaries required only to preserve (not generate) provenance metadata; a sketch of that division of labor follows this list.
- Carve out a clear safe harbor for good-faith detection failures, mirroring the Digital Millennium Copyright Act's notice-and-takedown structure rather than imposing strict liability.
- Reserve criminal exposure for the narrow set of harms — non-consensual intimate deepfakes, election manipulation, financial fraud — where the deterrent effect is highest and the speech costs are lowest.
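To make the first alternative concrete, here is a minimal sketch of what a preserve-only duty might look like inside an upload pipeline. The function names and metadata fields are hypothetical illustrations, not drawn from the draft rules or any real platform API:

```python
# Hypothetical upload hook illustrating a "preserve, don't generate"
# provenance duty. All names and fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content: bytes
    provenance_manifest: Optional[bytes]  # signed manifest, if the creating tool attached one

def process_upload(upload: Upload) -> dict:
    if upload.provenance_manifest is not None:
        # Duty: pass existing provenance through byte-for-byte and
        # surface a label derived from it. No detection required.
        return {
            "content": upload.content,
            "manifest": upload.provenance_manifest,
            "label": "AI-generated (provenance attached by creating tool)",
        }
    # No manifest: the intermediary is not obliged to guess. Harm-specific
    # rules (impersonation, NCII, fraud) still apply downstream.
    return {"content": upload.content, "manifest": None, "label": None}
```

The appeal of this division of labor is that liability attaches to a deterministic, auditable check (was an intact manifest preserved and surfaced?) rather than to the verdict of a probabilistic classifier that no platform can make error-free.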
India will, and should, set its own digital sovereignty agenda. But a labeling mandate that cannot technically be complied with does not reduce deepfake harm; it manufactures liability and chills the very platforms that compete with the closed Chinese and Russian models of internet governance. The consultation window is the right moment to recalibrate.