In March 2026, Motorola Mobility's Indian arm filed a civil defamation suit in Delhi naming Google, Meta, X, YouTube, Instagram, Facebook, and Threads as defendants. The Lenovo-owned phonemaker is not just asking the court to order the removal of more than 360 specific posts that allegedly portray its devices as unsafe; it is also asking the court to direct the platforms to prevent similar content from appearing in the future, including deepfaked or AI-generated material that may not yet exist. Digital rights advocates and platform lawyers have flagged the case as a potential inflection point for India's intermediary liability regime, and a preview of the fights that will define the long-promised Digital India Act.
Why this case is different
Brand-defamation lawsuits in India are not new. What is unusual is the structure of Motorola's prayer for relief. Instead of using the established route under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — flagging specific URLs to grievance officers and escalating to the Grievance Appellate Committee — Motorola has gone directly to court and named the platforms themselves as defendants. As Rest of World reported, the company is also seeking what amounts to a forward-looking injunction: an order requiring intermediaries to proactively block "similar" content as and when it surfaces.
That is a meaningful escalation. Indian courts have, in the past, issued so-called "dynamic injunctions" against rogue piracy sites and, more recently, against deepfake imagery of celebrities such as Anil Kapoor and Jackie Shroff. Extending the same logic to a corporate complainant policing consumer reviews — even if some are AI-generated — would push intermediaries from a notice-and-takedown posture toward a proactive monitoring obligation. That posture sits uneasily with Section 79 of the IT Act and the Supreme Court's 2015 ruling in Shreya Singhal v. Union of India, which held that intermediaries are not required to take down content absent a court order or a government direction.
Deepfakes are a real problem — but the cure can be worse
The pro-innovation case is not that synthetic media is harmless. It plainly is not: fabricated videos of brand executives, fake "exploding battery" clips, and AI-generated impersonations of celebrities endorsing products are now a routine part of the Indian information environment. Motorola's grievance is legitimate to the extent that some of the content it flags is fabricated. The question is whether the remedy it is asking for — a standing order to police future posts — is proportionate.
There are three reasons to be cautious:
- Over-removal risk. If platforms face contempt exposure for failing to block "similar" content, the rational response is to over-filter. Genuine consumer complaints — a phone that actually overheated, a battery that swelled, an honest review — get swept up alongside fabricated material.
- Speech asymmetry. Large brands have legal teams. Individual users, small reviewers, and YouTube creators do not. A regime where the default outcome of a corporate notice is removal will stack the deck against ordinary speech.
- Constitutional friction. A blanket pre-screening obligation runs into the Shreya Singhal line and Article 19(1)(a). Even the IT Rules, 2021 stop short of mandating proactive monitoring of user-generated content; the Supreme Court has read the grounds for stripping intermediaries of safe harbour narrowly, and it has not yet endorsed prior restraint.
The same dynamic is visible in adjacent cases. On May 13, 2026, a Delhi district court directed the news portal OpIndia to temporarily remove two articles about journalist Swati Chaturvedi pending a defamation suit, and restrained it from publishing further allegedly defamatory material — an interim order, but one that effectively requires the publisher to self-police future speech about a specific subject. Whether the complainant is a journalist or a phone company, the architecture is the same: courts being asked to order forward-looking content suppression.
The Digital India Act is the real venue
India has spent the better part of three years drafting a Digital India Act to replace the IT Act, 2000. The proposed law is expected to introduce a tiered approach to intermediaries — distinguishing significant social media intermediaries, e-commerce, search, and AI providers — and to address deepfakes, non-consensual synthetic intimate imagery, and algorithmic amplification. Cases like Motorola's are precisely why that legislative process matters. It is far better to settle the question of how India treats synthetic media through a debated statute, with safeguards and clear definitions, than to have it incrementally written by trial-court injunctions in commercial disputes.
A proportionate Digital India Act would do three things. First, it would create a fast-track judicial mechanism for verified deepfakes — particularly non-consensual sexual imagery and impersonation — with strict timelines and judicial oversight, rather than handing brands a private right to compel ex ante filtering. Second, it would preserve safe harbour for genuine user speech, including critical product reviews, with clear carve-outs only for content already adjudicated unlawful. Third, it would require transparency reporting on takedown demands by both governments and private complainants — a discipline India currently lacks.
What platforms should do now
Intermediaries should vigorously contest the forward-looking element of Motorola's prayer. Conceding a duty to pre-emptively filter "similar" speech in one commercial case will set a template that every aggrieved brand, politician, and litigant will replicate. India's startup ecosystem — the very ecosystem that domestic VCs are now funding at record levels, as Rest of World reported this month — depends on an open internet where small creators, reviewers, and developers can speak without each post triggering a platform compliance review.
Deepfakes deserve a serious legal response. They do not deserve a private censorship regime built case-by-case in district courts.