In late October 2025, India's Ministry of Electronics and Information Technology (MeitY) published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The proposal would require every significant social media intermediary operating in India to label AI-generated and synthetically altered content — including deepfakes — with visible markers, embedded metadata, and a user declaration step at the point of upload. For US platforms that count India among their largest user bases, the compliance question is no longer hypothetical.
The amendments tie labeling directly to Section 79 of the IT Act, the safe-harbor provision that shields intermediaries from liability for user-generated content. Platforms that fail to deploy the required AI-content markers risk losing that protection — a posture that effectively converts a transparency obligation into an existential business risk. For Meta, Google/YouTube, X, and OpenAI, that is the part of the draft that demands serious attention.
What the Draft Actually Says
The draft introduces three operative requirements. First, a user uploading content must declare whether it is synthetically generated or modified. Second, platforms must apply a visible label on the content surface and embed a machine-readable marker in the file's metadata. Third, intermediaries are expected to use reasonable technical measures to detect undeclared synthetic content and to act on it. The text frames these as part of the platforms' existing 'due diligence' obligations under the 2021 Rules, which means non-compliance flows through the safe-harbor mechanism rather than triggering a separate penalty schedule.
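The three-step flow can be sketched as a minimal upload pipeline. This is purely illustrative: the draft prescribes obligations, not an implementation, and every name here (`Upload`, `process_upload`, the `synthetic_content` metadata key) is a hypothetical stand-in, not anything MeitY has specified.

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    content: bytes
    # Step 1: the uploader's own declaration at the point of upload.
    declared_synthetic: bool
    metadata: dict = field(default_factory=dict)
    visibly_labeled: bool = False

def looks_synthetic(content: bytes) -> bool:
    """Stand-in for a platform-side detector. In practice this is an open
    research problem: real classifiers are probabilistic and error-prone."""
    return False  # placeholder

def process_upload(upload: Upload) -> Upload:
    # Step 3: 'reasonable technical measures' to catch undeclared synthetic media.
    is_synthetic = upload.declared_synthetic or looks_synthetic(upload.content)
    if is_synthetic:
        # Step 2a: visible label on the content surface.
        upload.visibly_labeled = True
        # Step 2b: machine-readable marker embedded in the file's metadata.
        upload.metadata["synthetic_content"] = "true"
    return upload
```

The sketch makes the structural point in the paragraph above concrete: everything downstream of Step 1 depends either on an honest user declaration or on a detector that does not reliably exist.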
That structure is significant. India is not creating a bespoke AI agency or a fine regime modeled on the EU AI Act. It is bolting AI-content rules onto an intermediary-liability framework built for an earlier internet. The leverage is real, but the rules are blunt.
Why This Matters for US Platforms
India has more than 800 million internet users and is the single largest market for WhatsApp, a top-three market for YouTube and Instagram, and a growing market for ChatGPT. Compliance is not optional. But compliance is also non-trivial: the labeling pipeline contemplated by MeitY assumes platforms can reliably classify synthetic media at upload time, which remains an open research problem. Content provenance standards like the Coalition for Content Provenance and Authenticity (C2PA), which Adobe, Microsoft, and OpenAI have adopted, offer part of an answer — but they only cover content created with cooperating tools. The long tail of open-source generators produces unsigned outputs that any detector will struggle with.
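To make the provenance gap concrete: C2PA manifests in JPEG files are embedded as JUMBF data inside APP11 marker segments, so a platform can cheaply check whether a file even carries a manifest. The stdlib sketch below is a rough presence heuristic only, not a validator; it says nothing about whether an embedded manifest is intact, signed, or C2PA at all, and unsigned output from open-source generators will simply return `False`.

```python
def has_app11_segment(data: bytes) -> bool:
    """Heuristic: does this JPEG contain an APP11 (0xFFEB) segment, the
    marker type C2PA uses to embed JUMBF manifest data? Presence of the
    segment is necessary but not sufficient for valid provenance."""
    if not data.startswith(b"\xff\xd8"):  # SOI: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # malformed stream
            return False
        marker = data[i + 1]
        if marker == 0xEB:   # APP11 found
            return True
        if marker == 0xDA:   # SOS: entropy-coded data follows, stop scanning
            return False
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

The asymmetry the paragraph describes falls out immediately: a cooperating tool writes the segment and the check succeeds, while the long tail of generators that never write provenance data is indistinguishable from ordinary camera output.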
That gap creates a one-way ratchet. Platforms must do more than they technically can, and any high-profile deepfake that slips through becomes evidence of 'failure of due diligence' and a justification for stripping safe harbor. The risk is asymmetric: success is invisible; failure is a press conference.
The Pro-Innovation Concern
To be clear, labeling synthetic political content, non-consensual intimate imagery, and impersonation deepfakes is a worthy goal. India has had genuine incidents — including a high-profile deepfake of actress Rashmika Mandanna in late 2023 that prompted MeitY's initial advisories — and citizens deserve tools to navigate manipulated media, particularly around elections.
The trouble is proportionality. The draft applies the same labeling regime to a satirical meme made with a free filter as it does to a coordinated political deepfake. It puts the same compliance burden on a 10-employee Indian-language platform as on Meta. And by tying everything to Section 79, it makes any enforcement dispute a fight over whether a platform should be able to operate in India at all, rather than a fight over a specific piece of content.
A rule that treats every synthetic image as a potential safe-harbor violation will, over time, push platforms toward over-removal and pre-publication friction. That is the opposite of an open internet.
What a Better Version Looks Like
There is a workable path. A more proportionate framework would: (i) scope mandatory labeling to high-risk categories — elections, public figures, financial scams — rather than all synthetic content; (ii) decouple labeling failures from Section 79, treating them as standalone compliance breaches with calibrated penalties; (iii) recognize industry provenance standards like C2PA as a presumed-compliant pathway; and (iv) provide a safe-harbor-within-the-safe-harbor for platforms that adopt reasonable detection measures, even when individual pieces slip through.
US platforms have a window to make this case. MeitY ran a public consultation on the draft, and the final rules have not yet been notified in the Gazette. India's Digital India Act, expected to eventually replace the IT Act in full, is the larger horizon — and the labeling rules will set the template for how AI obligations are written into that successor statute.
The Bigger Picture
India's draft is part of a global wave. The EU AI Act's Article 50 requires similar labeling of synthetic content from August 2026, and California's AB 2655 took effect in 2025. The risk is not that any one of these regimes is unreasonable — it is that platforms will face a patchwork of incompatible labeling schemes, each tied to a different liability hook. For an open internet to remain genuinely global, regulators and platforms need to converge on interoperable standards rather than each jurisdiction inventing its own marker, its own metadata field, and its own penalty for non-compliance.
India has an opportunity to lead that convergence. The draft, in its current form, doesn't quite get there. The next six months will decide whether it does.