India is in the middle of writing the rules that will shape its artificial intelligence economy for a generation. After two years of consultation papers, ministerial advisories, and a high-profile expert subcommittee report, the Ministry of Electronics and Information Technology (MeitY) is finalising what officials describe as a 'risk-calibrated' AI governance framework — likely to be operationalised through the long-pending Digital India Act and supplementary rules under the Digital Personal Data Protection (DPDP) Act, 2023.
The direction of travel matters far beyond New Delhi. India hosts the world's second-largest internet user base, the largest IT-services workforce, and — through the IndiaAI Mission's ₹10,372 crore (≈$1.25 billion) commitment — one of the most ambitious public compute build-outs outside the United States and China. Get the rules right, and India can become the default destination for affordable, multilingual, frontier-adjacent AI development. Get them wrong, and the country risks repeating the regulatory missteps that have already chilled AI investment in parts of Europe.
From the March 2024 Advisory to a More Mature Posture
The defining moment in India's recent AI policy was MeitY's March 1, 2024 advisory, which initially required intermediaries to obtain government 'permission' before deploying 'under-tested or unreliable' generative AI models. The backlash from start-ups, venture capitalists, and global researchers was immediate and largely justified: a licensing regime for software releases would have placed India closer to Beijing's generative AI rules than to any liberal-democratic peer.
To MeitY's credit, the advisory was substantively revised within two weeks, dropping the permission requirement and narrowing its scope. The episode taught a useful lesson: AI policy made by executive instrument, without parliamentary deliberation or impact assessment, produces fragile and counterproductive rules. The January 2025 'Report on AI Governance Guidelines Development', prepared by a MeitY-appointed subcommittee working under an advisory group chaired by Principal Scientific Adviser Ajay Kumar Sood, internalised that lesson: it recommends a 'whole-of-government' coordination mechanism rather than a single AI regulator, and explicitly endorses 'techno-legal' interventions over hard prohibitions.
What a Proportionate Indian Framework Should Look Like
A well-designed Indian AI regime should rest on four pillars, none of which requires new prescriptive licensing.
1. Build on the DPDP Act, don't duplicate it
The DPDP Act, notified in 2023 with draft rules released for consultation in January 2025, already gives the Data Protection Board jurisdiction over algorithmic processing of personal data — including profiling, automated decision-making, and training-data governance. Layering an additional AI-specific consent regime on top would create overlapping enforcement and compliance fatigue, particularly for the 1.4 lakh+ DPIIT-recognised start-ups that now sit at the heart of India's AI ecosystem.
2. Sectoral regulators, not a super-regulator
The Reserve Bank of India already supervises AI in credit scoring; SEBI oversees algorithmic trading; the Central Drugs Standard Control Organisation evaluates AI-enabled medical devices; and the Telecom Regulatory Authority of India has issued recommendations on AI in telecom. These regulators understand the risk surfaces in their sectors far better than a generalist AI authority ever could. The government's own 2025 subcommittee report rightly recommends empowering them rather than displacing them.
3. Liability that follows harm, not capability
The EU AI Act's 'high-risk' classification has already pushed several open-source model developers to delay European releases. India should avoid the same trap. Liability should attach to deployers who cause concrete harm — discriminatory lending, defamatory deepfakes, unsafe medical recommendations — not to upstream model developers based on parameter counts or compute thresholds. The Bombay High Court's 2024 ruling in Kunal Kamra v. Union of India, which struck down the IT Rules' fact-check unit amendment, reaffirmed that vague, content-focused mandates on intermediaries fail constitutional scrutiny under Article 19(1)(a).
4. Mandatory transparency, voluntary safety testing
Disclosure obligations — synthetic media labelling, model cards for systems used in public services, incident reporting for serious harms — are low-cost and high-value. They are also constitutionally robust. By contrast, mandatory pre-deployment red-teaming of all foundation models, as some commentators have proposed, would impose costs only well-funded incumbents can absorb, entrenching the very Big Tech dominance Indian policymakers regularly criticise.
The Stakes: A $17 Billion Market and India's Soft Power
NASSCOM and BCG project India's AI market to reach $17 billion by 2027, growing at a 25–35% CAGR. The IndiaAI Mission's GPU procurement, with over 18,000 high-end accelerators tendered in 2024–25, is already drawing interest from researchers everywhere from Bengaluru to Toronto. Public-interest efforts like Bhashini, AI4Bharat, and the forthcoming sovereign foundation model for Indian languages depend on a regulatory climate that treats experimentation as a feature, not a risk.
India's comparative advantage is not capital or compute — it is the ability to build affordable, multilingual AI for the global majority. That advantage evaporates the moment compliance overhead exceeds engineering overhead.
There is also a geopolitical argument. As Washington and Brussels diverge on AI rules, and as the Hiroshima Process and Bletchley Declaration commitments translate unevenly into domestic law, India has a rare window to set a credible 'third way' standard for the Global South. A proportionate, evidence-based, rights-respecting Indian framework — anchored in the DPDP Act, sectoral expertise, and constitutional speech protections — would be far more influential abroad than a copy-paste of either the EU AI Act or China's algorithm registry.
The Risk of Overcorrection
The pressure to 'do something' about deepfakes, election misinformation, and AI-enabled fraud is real and legitimate. But India has been here before. The 2021 IT Rules, drafted with similar urgency, are still being litigated five years on. AI legislation drafted in the same reactive register would lock in approaches that look prudent in 2026 and obsolete by 2028. The better path, and the one the subcommittee report quietly endorses, is a narrow, amendable framework that punishes harms, mandates transparency, and otherwise lets India's developers build.