India's draft Digital Personal Data Protection (DPDP) Rules, released by the Ministry of Electronics and Information Technology (MeitY) for public consultation, attempt one of the most ambitious child-safety interventions any major democracy has tried: a blanket requirement that 'Data Fiduciaries' obtain verifiable parental consent before processing the personal data of any user under the age of 18. In practice, this turns every consumer-facing digital service operating in India — social media platforms, gaming services, ed-tech apps, streaming providers, even messaging tools — into a de facto age-verification gatekeeper for the country's roughly 250 million minors online.
The intention is unimpeachable. Children deserve robust protection from predatory data practices, manipulative design, and harmful content. But the draft framework, as written, risks importing the worst features of age-gating regimes elsewhere while doing little to actually keep children safer. A more proportionate, evidence-based design is both possible and necessary.
What the draft Rules require
The DPDP Act, 2023 defines a 'child' as anyone under 18 and prohibits processing their personal data without verifiable parental consent. It also bars tracking, behavioural monitoring, and targeted advertising directed at children. The draft Rules, published in early 2025 for consultation, operationalise these provisions by requiring Data Fiduciaries to:
- Verify that a consenting individual is, in fact, the parent or lawful guardian of the child;
- Cross-check parental identity and age using 'reliable details of identity' — which in practice points to government-issued IDs, including Aadhaar-linked data or 'virtual tokens' issued by a Digital Locker entity;
- Maintain records of consent and parent-child linkage.
The Rules carve out limited exemptions for healthcare providers, educational institutions, and child-welfare bodies, but offer no meaningful relief for the vast bulk of consumer services where children spend their time online.
Why mandatory age-gating is the wrong default
The core problem is structural. To verify that a user is over 18 — or to confirm that a 'parent' is actually a parent — platforms must collect more sensitive data about every Indian user, not less. The same Act that is meant to advance data minimisation ends up incentivising the opposite: bulk identity collection at the front door of every app. Aadhaar-based verification, even via 'virtual tokens', deepens platform dependence on a centralised identity stack that the Supreme Court itself, in Justice K.S. Puttaswamy v. Union of India (2017), warned must be deployed with proportionality and necessity.
The feasibility concerns are not theoretical. The UK's Online Safety Act and the EU's emerging age-assurance frameworks have spent years grappling with how to verify age without surveilling everyone, and there is still no consensus on a method that is simultaneously privacy-preserving, accurate, low-friction, and inclusive of users without formal ID. India's draft Rules largely sidestep these design debates.
There are also real costs to teens themselves. A blanket under-18 cut-off treats a 17-year-old preparing for university entrance exams the same as an eight-year-old — ignoring the well-established principle, recognised in the UN Convention on the Rights of the Child, that minors have evolving capacities and corresponding rights to access information, participate in civic life, and develop digital literacy. If parental consent becomes a hard prerequisite for everything from Wikipedia to coding tutorials to mental-health resources, India risks creating a generation of digitally under-equipped young adults precisely as the country tries to position itself as the world's talent engine.
Industry and civil-society pushback
The public consultation has drawn sharply critical responses from across the spectrum. Industry bodies including NASSCOM and the Internet and Mobile Association of India have warned that strict verification mandates will entrench incumbents, since only the largest platforms can absorb the compliance overhead. Smaller Indian start-ups — the very innovators MeitY says it wants to nurture — will face disproportionate costs.
Civil-society groups such as the Internet Freedom Foundation have flagged the proportionality problem and the chilling effect on legitimate teen speech. Several submissions have urged MeitY to adopt a risk-based approach: stronger obligations for services that pose genuine risks to children (gambling-adjacent gaming, dating apps, adult content), and lighter-touch obligations for general information services.
A more proportionate path forward
India does not have to choose between child safety and an open internet. A better-calibrated framework would:
- Tier obligations by risk. Reserve hard verification for genuinely age-restricted services. Allow general-purpose platforms to use proportionate age-assurance signals rather than full identity verification.
- Recognise evolving capacities. Differentiate obligations for under-13, 13–15, and 16–17 cohorts, as the UK's Age Appropriate Design Code and similar frameworks do.
- Mandate privacy-preserving methods. Encourage zero-knowledge age tokens and on-device attestation, not bulk ID collection.
- Sunset and review. Build in a statutory review after two years, with published evidence on whether the rules actually reduce harm.
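To make the "privacy-preserving methods" recommendation concrete, here is a minimal sketch of the data-minimisation idea behind age tokens: a trusted issuer (say, a DigiLocker-style entity) verifies identity once and hands the user a short-lived signed claim carrying only an age bracket, which the platform can check without ever seeing a name, ID number, or date of birth. This is an illustration, not a production design: it uses an HMAC shared secret for brevity where a real deployment would use public-key signatures or genuine zero-knowledge proofs, and the `ISSUER_SECRET` and function names are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret for this sketch; a real issuer would sign
# tokens with an asymmetric key so platforms hold no signing material.
ISSUER_SECRET = b"demo-secret"

def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> str:
    """Issuer side: sign a minimal age-bracket claim.

    No name, no ID number, no birth date leaves the issuer --
    only the single boolean the service actually needs, plus an expiry.
    """
    claim = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    sig = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_age_token(token: str) -> bool:
    """Platform side: check signature and expiry; learn only the age bracket."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(body))
    return claim["over_18"] and claim["exp"] > time.time()

token = issue_age_token(over_18=True)
print(verify_age_token(token))  # True: the gate passes with no identity disclosed
```

The point of the sketch is the information flow, not the cryptography: the platform ends the exchange knowing one bit about the user, which is the opposite of the bulk ID collection the draft Rules currently incentivise.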
Protecting children online is a legitimate and urgent objective. But a rule that turns every Indian platform into an identity checkpoint — while pushing teens toward unregulated foreign services and dark corners of the web — would be a costly own-goal. MeitY has an opportunity, before finalising the Rules, to lead globally on a more thoughtful, proportionate, and innovation-compatible model of online child safety. India's young users, and its digital economy, deserve nothing less.