When India's Ministry of Electronics and Information Technology (MeitY) published its draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in late October 2025, the response from US-headquartered platforms was swift and pointed. Meta, Google/YouTube, X, OpenAI and Microsoft, along with the industry associations representing them, have spent the public consultation window (which ran through November 2025, with engagement continuing into 2026) pushing back on what they describe as a feasibility problem dressed up as a transparency mandate.
The proposed rules would require significant social media intermediaries (SSMIs) and AI platforms to visibly label and watermark any AI-generated or ‘synthetically modified’ content, oblige users to self-declare such content at the point of upload, and conduct due diligence to verify whether those self-declarations are accurate. One provision in particular, the requirement that visible labels occupy at least 10% of the visual area of an image or the opening seconds of a video, has become the flashpoint.
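To make the scale of that requirement concrete, a quick back-of-envelope calculation helps (the frame resolutions below are illustrative assumptions; the draft specifies the ratio, not any particular resolution):

```python
# Back-of-envelope: what a 10%-of-visual-area label means on common frames.
# Resolutions are illustrative; the draft mandates the ratio, not the pixels.
frames = {"1080p video": (1920, 1080), "square social image": (1080, 1080)}

for name, (width, height) in frames.items():
    label_area = 0.10 * width * height
    # If rendered as a full-width banner, how tall must the label be?
    banner_height = label_area / width
    print(f"{name}: {label_area:,.0f} px² of label, "
          f"a full-width banner ~{banner_height:.0f} px tall")

# 1080p video: 207,360 px² of label, a full-width banner ~108 px tall
# square social image: 116,640 px² of label, a full-width banner ~108 px tall
```

However the label is laid out, a compliant full-width banner is always one-tenth of the frame's height, on every frame, at every resolution.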
The Proportionality Problem
India's policy intent is defensible. Synthetic media is being weaponised in political contexts globally; the 2024 Indian general election saw documented misuse of AI-generated voice clones and face-swap videos targeting candidates, and MeitY's earlier deepfake advisories of November and December 2023, followed by the controversial March 2024 ‘under-tested AI’ advisory, were direct responses to that pattern. The desire to give Indian users a clear signal that what they are watching is not real is legitimate.
The problem is not the goal — it is the means. Three design choices in the draft sit uneasily with the proportionate-regulation principle that ought to govern intermediary law:
- The scope of ‘significantly modified’ content is undefined. Does a Snapchat beautifying filter qualify? A standard auto-colour-correct in Google Photos? Generative fill in Adobe Photoshop? Without a workable threshold, the rule risks either diluting labels into meaninglessness by sweeping in routine edits or inviting arbitrary enforcement.
- The 10% visual-area mandate degrades the content itself. A persistent watermark covering a tenth of the frame is not a label — it is a defacement. For legitimate creative, educational, and journalistic uses of generative tools, this turns a transparency requirement into a usability tax.
- Platform-side verification is technically unsettled. Content provenance standards like C2PA, championed by Adobe, Microsoft, the BBC and others, are still being rolled out. Asking platforms to ‘verify’ user self-declarations in real time, at upload scale, across billions of pieces of content is closer to mandating a technology that does not yet exist at production scale than to regulating one that does.
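To see why the signing model matters here, the sketch below illustrates, in deliberately simplified form, the provenance approach that standards like C2PA build on: cryptographically binding an ‘AI-generated’ assertion to the content's bytes. This is not the actual C2PA manifest or certificate-chain format; it is a toy model using Ed25519 signatures, and it assumes the Python `cryptography` package is available.

```python
# Simplified provenance-signing model (NOT the real C2PA container format).
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_manifest(content: bytes, key: Ed25519PrivateKey, generator: str) -> dict:
    """Creator side: bind an AI-generation assertion to the content hash."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertion": "ai_generated",
        "generator": generator,  # tool or model that produced the content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_manifest(content: bytes, signed: dict, public_key: Ed25519PublicKey) -> bool:
    """Platform side: cheap, deterministic check at upload time."""
    manifest = signed["manifest"]
    if manifest["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # bytes were altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# A generator signs at creation time; an intermediary verifies at upload.
key = Ed25519PrivateKey.generate()
image_bytes = b"...rendered image bytes..."
signed = sign_manifest(image_bytes, key, generator="example-image-model")
assert verify_manifest(image_bytes, signed, key.public_key())
assert not verify_manifest(image_bytes + b"tampered", signed, key.public_key())
```

The asymmetry is the point: checking a signed manifest is a cheap, deterministic operation a platform can run on every upload, while ‘verifying’ an unsigned self-declaration against the content itself has no comparable primitive, which is the feasibility gap this bullet describes.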
Why US Platforms Are Treating This as Precedent
For Meta, Google, X, OpenAI and Microsoft, India is not just a large market — it is a regulatory testbed. The IT Rules, 2021 have already been the template for content-takedown obligations, grievance officer mandates, and traceability requirements that have been litigated in courts including the Bombay and Karnataka High Courts. The DPDP Act, 2023 and its draft implementing rules are similarly shaping how US firms structure consent, data localisation, and breach notification globally.
If India operationalises a 10%-area watermark mandate, expect variants to surface in Indonesia, the Philippines, and Brazil's ongoing AI debate, as well as within the EU's conversation on implementing the AI Act's Article 50 transparency obligations for synthetic content. A globally fragmented set of labelling rules, each with different size thresholds, language requirements, and verification standards, is precisely the kind of compliance environment that benefits incumbents and punishes smaller AI developers and open-source projects.
A Better Path
There is a genuinely pro-innovation, pro-user version of this regulation, and it is not far from what MeitY has drafted. The reform list is short:
- Anchor the labelling obligation to provenance metadata standards (C2PA-style cryptographic signing) rather than to visible watermarks of fixed size. Visible disclosure can be an option, not the only option.
- Define ‘synthetic modification’ narrowly: AI-generated likenesses of real persons, AI-generated audio of real persons' voices, and substantial scene-fabrication, not beauty filters, colour grading, or generative tools used for ornamental backgrounds (a sketch of this triage logic follows the list).
- Shift the verification duty from ‘must verify’ to ‘reasonable efforts’, with a safe harbour for platforms that adopt recognised provenance frameworks and act on credible reports.
- Build in a sunset and review clause tied to the maturity of provenance technology, so the rule evolves with the state of the art.
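As flagged in the second item, here is a minimal sketch of how the narrow scope and the metadata-first labelling might combine into platform-side triage logic. Everything in it (the edit-type taxonomy, function names, and return strings) is a hypothetical illustration, not language from the draft or from any platform's systems.

```python
# Hypothetical triage under the narrow 'synthetic modification' scope above.
from enum import Enum, auto

class EditType(Enum):
    BEAUTY_FILTER = auto()           # routine cosmetic smoothing
    COLOUR_GRADING = auto()          # tonal and colour adjustment
    ORNAMENTAL_BACKGROUND = auto()   # generative fill behind the subject
    REAL_PERSON_LIKENESS = auto()    # AI-generated face or body of a real person
    REAL_PERSON_VOICE = auto()       # AI-generated audio of a real voice
    SCENE_FABRICATION = auto()       # substantial fabricated events or settings

# Only edits that can mislead about real people or real events need labels.
LABEL_REQUIRED = {
    EditType.REAL_PERSON_LIKENESS,
    EditType.REAL_PERSON_VOICE,
    EditType.SCENE_FABRICATION,
}

def disclosure_obligation(edits: set[EditType], has_signed_manifest: bool) -> str:
    """Return the disclosure duty under the narrow-scope proposal."""
    if not edits & LABEL_REQUIRED:
        return "no label required"
    if has_signed_manifest:
        return "machine-readable provenance label; visible disclosure optional"
    return "visible disclosure required"

print(disclosure_obligation({EditType.BEAUTY_FILTER}, False))
# -> no label required
print(disclosure_obligation({EditType.REAL_PERSON_VOICE}, True))
# -> machine-readable provenance label; visible disclosure optional
```

The safe-harbour idea in the third item maps onto the manifest branch: a platform that checks for recognised provenance credentials, and falls back to requiring visible disclosure only where none exist, is making ‘reasonable efforts’ without pretending to verify billions of uploads in real time.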
None of this dilutes India's ability to act against malicious deepfakes: existing provisions of the Bharatiya Nyaya Sanhita, Section 66D of the IT Act (cheating by personation), and platforms' grievance redressal duties already cover the worst conduct. What it does is preserve the open, experimental space that generative AI needs to mature responsibly.
The Stakes
India is on track to become the world's largest market for many of the US firms named in the consultation responses. How MeitY lands these amendments, whether through narrow, technology-aware drafting or a maximalist one-size-fits-all mandate, will shape not just the Indian internet but the global template for synthetic-content governance. The opportunity for the US-India tech relationship is real, but only if regulation is calibrated to what is achievable, not what is symbolically satisfying.