India is becoming one of the world's largest live laboratories for police facial recognition (FR) — and one of the least legally constrained. From Delhi Police's AFRS, deployed during the 2019–20 Citizenship Amendment Act protests and the 2020–21 farmers' agitation, to Telangana's TSCOP and Hawk Eye apps, to Uttar Pradesh Police's drone-and-AI surveillance grid over the 2025 Maha Kumbh Mela in Prayagraj, FR has gone from pilot to default tool of policing. The National Crime Records Bureau's National Automated Facial Recognition System (NAFRS), first tendered in 2019, is now being progressively threaded into the Crime and Criminal Tracking Network & Systems (CCTNS), giving every state police force a potential hook into a national face database.
The Internet Freedom Foundation's Project Panoptic tracker now logs more than 200 FR systems across central and state agencies. Amnesty International's Ban the Scan: Hyderabad campaign has identified that city as one of the most heavily surveilled in the world. And yet, as the Digital Personal Data Protection Act, 2023 (DPDP Act) enters its implementation phase with MeitY's draft DPDP Rules out for consultation, the single most consequential question — how the law applies to police FR — has effectively been answered: largely, it doesn't.
Section 17: The hole at the centre of the new privacy regime
The DPDP Act is, on paper, India's long-awaited answer to Justice K.S. Puttaswamy v. Union of India (2017), which recognised a fundamental right to privacy and laid down a three-part test for any state intrusion: legality, a legitimate state aim, and proportionality. The Act introduces the familiar building blocks: consent, purpose limitation, notice, data fiduciary obligations, a Data Protection Board.
But Section 17(2)(a) empowers the Union government to exempt any 'instrumentality of the State' from virtually the entire Act on grounds as broad as security and public order, and Section 17(1)(c) separately switches off the consent and notice obligations for processing in the interest of the prevention, detection, investigation or prosecution of offences. There is no statutory requirement of judicial authorisation, no independent oversight body for police data use, no purpose-limitation backstop, and no data subject right of access against law enforcement processing. And biometric data, the most sensitive category in every comparable regime from the EU's GDPR to Brazil's LGPD, is not classified as sensitive personal data at all under the DPDP Act.
The practical effect: when Delhi Police runs AFRS against a protest crowd, or UP Police runs FR-equipped drones over a religious gathering of tens of millions, the DPDP Act's consent and notice architecture simply does not apply. The new regulator has no clear mandate to audit those deployments.
The courts are doing the legislature's job
With Parliament silent on FR, the work has fallen to the judiciary. In SQ Masood v. State of Telangana, the Telangana High Court is examining whether the use of Hawk Eye and TSCOP — including reported instances of officers stopping pedestrians to photograph them on the street — meets the Puttaswamy proportionality test. The petitioner's argument is straightforward and, in our view, correct: without an enabling statute, defined retention periods, accuracy audits, or redress mechanisms, indiscriminate FR fails the 'legality' prong before any necessity analysis even begins.
That is the right judicial instinct. But constitutional litigation, deployment by deployment, is not a substitute for a coherent national framework.
A pro-innovation case for guardrails
We are firmly pro-innovation and supportive of India's growing public-tech and AI ecosystem. Indian firms such as Innefu Labs and Staqu are building serious computer vision products, and there are legitimate, proportionate uses for FR: missing-person reunification, identification of serious-crime suspects against narrow watchlists, victim identification in trafficking cases. The NCRB's original public framing of NAFRS emphasised exactly these uses.
The problem is not the technology; it is the absence of rules. Unconstrained FR is bad for innovation too:
- It poisons public trust. Every news cycle of FR-at-a-protest makes it harder for the same companies to sell into healthcare, fintech, or smart-city use cases.
- It invites worst-case regulation. Jurisdictions that fail to draw lines early — see the EU's path from open deployment to the AI Act's near-ban on real-time biometric identification in public spaces — end up with sharper, less nuanced rules later.
- It distorts procurement. Without accuracy benchmarks, demographic bias testing, or independent evaluation (India has nothing equivalent to the US NIST FRVT for procurement), the cheapest vendor wins, not the most accurate one.
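To make the bias-testing point concrete: below is a minimal Python sketch of the disaggregated evaluation an FRVT-style body performs, computing the false match rate (FMR) separately per demographic group at a fixed operating threshold. The group labels, the score distributions, and the 0.60 threshold are invented for illustration; a real evaluation would use a labelled benchmark set of impostor comparisons, not synthetic scores.

```python
import numpy as np

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Fraction of impostor (different-person) comparisons that wrongly
    score at or above the match threshold."""
    return float(np.mean(impostor_scores >= threshold))

def fmr_by_group(scores_by_group: dict[str, np.ndarray], threshold: float) -> dict[str, float]:
    """Report FMR separately for each demographic group rather than
    as a single aggregate figure."""
    return {g: false_match_rate(s, threshold) for g, s in scores_by_group.items()}

# Synthetic stand-in data: group_b's impostor scores sit slightly higher,
# the signature of a model that confuses faces within that group more often.
rng = np.random.default_rng(0)
scores_by_group = {
    "group_a": rng.normal(0.30, 0.10, 100_000),
    "group_b": rng.normal(0.38, 0.10, 100_000),
}

threshold = 0.60  # the single operating point a vendor might quote one FMR for
rates = fmr_by_group(scores_by_group, threshold)
print(rates)
print("disparity ratio:", max(rates.values()) / max(min(rates.values()), 1e-12))
```

Run on these synthetic numbers, the aggregate looks respectable while one group's FMR is roughly ten times the other's; surfacing exactly that gap before a contract is signed is what an independent evaluation body is for.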
What proportionate regulation looks like
A workable Indian framework — one that preserves legitimate law-enforcement use without normalising mass biometric surveillance — would include:
- A standalone statute authorising police FR, not a Section 17 exemption hiding inside a general data law.
- A presumption against real-time FR in public spaces, with narrow, judicially supervised exceptions for serious crimes.
- Mandatory accuracy and bias testing by an independent body before any system is procured.
- Statutory retention limits, audit logs accessible to a parliamentary or judicial oversight committee, and a right of redress.
- Classification of biometric data as sensitive personal data, restoring a basic GDPR-/LGPD-style floor.
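None of this machinery is exotic. On the audit-log and retention points in particular, here is a minimal sketch of what a statutorily mandated, oversight-accessible record of each FR search could look like; the field names, the case references, and the 90-day window are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # illustrative statutory window, not a real figure

@dataclass(frozen=True)
class FRSearchRecord:
    """One append-only entry per face search, reviewable by an oversight body."""
    timestamp: datetime
    officer_id: str        # who ran the search
    legal_basis: str       # case number or judicial authorisation reference
    watchlist_id: str      # which (narrow) watchlist was queried
    match_returned: bool
    threshold_used: float  # operating point, so accuracy claims stay auditable

def purge_expired(log: list[FRSearchRecord], now: datetime) -> list[FRSearchRecord]:
    """Enforce the retention limit: drop records older than the statutory window."""
    return [r for r in log if now - r.timestamp <= RETENTION]

# A single logged search, serialisable for a parliamentary or judicial committee.
# All identifiers below are hypothetical.
record = FRSearchRecord(
    timestamp=datetime.now(timezone.utc),
    officer_id="PS-0421",
    legal_basis="FIR 123/2025; warrant W-77",
    watchlist_id="missing-persons-2025",
    match_returned=False,
    threshold_used=0.60,
)
print(asdict(record))
```

An append-only log with these fields would let an oversight committee answer the basic accountability questions: who searched, under what legal authority, against which watchlist, and at what threshold, with stale records purged on a statutory schedule.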
The window is closing
The DPDP Rules are still in draft. Parliament can still revisit Section 17 — and should. India does not have to choose between an Orwellian default and an outright ban. But it does have to choose. Right now, by design or by drift, it has chosen the surveillance default. That is not a pro-innovation outcome; it is an under-regulated one, and the bill — in litigation, in public trust, and eventually in harsher overcorrection — will come due.