India’s AI Legal Crisis: When Yesterday’s Laws Collide With Tomorrow’s Technology 

India is facing a critical legal crisis: its rapidly advancing artificial intelligence ecosystem, showcased at the 2026 AI Action Summit, operates under a grossly outdated legal framework, built primarily on the 26-year-old Information Technology Act of 2000, which contains no provisions for AI-specific issues such as liability for autonomous decisions, algorithmic transparency, data security, or the legal status of AI-generated content. This widening gap between technological capability and legal infrastructure creates profound uncertainty for citizens, who lack recourse when harmed by AI systems; for businesses, which must navigate unclear liability rules; and for national security, since there are no enforceable cybersecurity standards for AI deployed in critical sectors. While other nations have enacted comprehensive AI legislation, India continues to rely on voluntary guidelines and narrow amendments, leaving foundational questions unresolved and threatening both individual rights and the country’s credibility as a responsible global AI power.

The Paradox at the Heart of India’s AI Ambition 

The scene at Bharat Mandapam in February 2026 could not have been more impressive. World leaders, technology CEOs, and the brightest minds in artificial intelligence had gathered for the AI Action Summit, the largest event of its kind ever organised on Indian soil. The message broadcast to the world was exactly the one India wanted to send: we have arrived as a serious force in the global AI landscape. 

And yet, a few kilometres away from the summit’s glitz and optimism, a different reality was playing out. In a district court in Maharashtra, a judge was trying to determine who should be held liable when a loan applicant was wrongly denied credit by an AI-powered underwriting system. The applicant, a small farmer, had been rated as “high risk” by an algorithm that no one could fully explain. The bank pointed to the software vendor. The software vendor pointed to the bank’s implementation team. And the judge, trained in contract law and the Indian Evidence Act, was left with a legal framework that had not imagined, let alone addressed, the possibility of autonomous systems making consequential decisions about people’s lives. 

This is the paradox that defines India’s AI moment. On the world stage, we project confidence and ambition. On the ground, our legal system is struggling to keep pace with a technology that is transforming every sector it touches. 

 

The IT Act 2000: A 26-Year-Old Law in a 5-Minute World 

To understand the depth of India’s AI legal crisis, you have to understand the Information Technology Act of 2000. It was passed when India’s internet user base was measured in lakhs, not crores. When the most advanced mobile phone in the country was the Nokia 3310. When “artificial intelligence” meant something from science fiction, not something that could write poetry, diagnose diseases, or drive cars. 

The Act was designed for a world where computers did what they were told, where “digital signatures” were a novelty, and where the most significant legal question about electronic records was whether they could be admitted as evidence in court. It contains exactly zero provisions addressing artificial intelligence, machine learning, algorithmic decision-making, or the legal status of content generated by AI. These terms simply do not appear in the statute. 

Yet in 2026, this is the primary legislation that courts, regulators, and litigants must stretch and interpret to resolve disputes involving AI systems. It is like using a bicycle repair manual to troubleshoot a spacecraft. 

The practical consequences are not abstract. Consider the question of liability. When a traditional software system causes harm, the chain of causation is relatively straightforward. Someone wrote faulty code. Someone else deployed it improperly. The responsible parties can be identified and held accountable. But what happens when an AI system, trained on vast datasets and capable of learning and adapting, makes a decision that causes harm? The developer may have written the underlying code but cannot predict every decision the system will make. The deployer may have implemented the system in good faith but cannot monitor every output. The user may have relied on the system’s recommendation but cannot understand how that recommendation was reached. 

Current Indian law offers no clear answer to this question. There is no statutory framework for assigning liability in cases of autonomous AI decision-making. There is no doctrine of electronic personhood that would allow an AI system itself to bear legal responsibility. There is no clear standard for when a developer’s duty of care ends and a deployer’s begins. Every dispute becomes a matter of judicial interpretation, which means inconsistent outcomes, prolonged litigation, and, for the ordinary citizen, a near-impossible path to justice. 

 

The Intermediary Mirage 

One of the most glaring examples of legal misfit concerns the classification of AI companies under the IT Act. The Act’s intermediary framework, which provides limited liability protection to platforms that host user-generated content, was designed for entities like social media companies, messaging apps, and e-commerce marketplaces. The logic was simple: if you are merely providing a platform for others to post content, you should not be held strictly liable for everything they post, provided you follow certain due diligence requirements. 

But what does “intermediary” mean when applied to a company whose core product is an AI system that generates original content? Consider a generative AI platform that creates text, images, or code in response to user prompts. Is it hosting content that users created? No, it is creating new content based on its training and the user’s input. Is it itself the author of that content? The law does not say. Is it entitled to the safe harbour protections that shield traditional platforms from liability? Courts are divided. 

This classification ambiguity has real consequences. If an AI platform is treated as an intermediary, it may escape liability for harmful outputs that its systems generate, unless it fails to comply with due diligence requirements. If it is not treated as an intermediary, it may face direct liability for everything its systems produce, a standard so strict that it could effectively prohibit the deployment of generative AI in India. Neither outcome is ideal. Both create uncertainty. And uncertainty, for businesses making investment decisions, is often worse than clear but stringent regulation. 

 

The Cybersecurity Vacuum: When ISO 27001 Is Not Enough 

Perhaps the most worrying gap in India’s legal framework lies at the intersection of AI and cybersecurity. Dozens of countries have enacted comprehensive national cybersecurity laws that establish clear obligations, minimum security standards, and accountability mechanisms. India has none of this. The closest we come is the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, which essentially tell organisations to follow ISO 27001 and call it a day. 

ISO 27001 is a perfectly respectable information security standard. But it was designed for a world of servers, databases, and networks. It does not address the specific vulnerabilities introduced by machine learning systems. It does not consider the risks of adversarial attacks, where carefully crafted inputs trick an AI system into making errors. It does not address training data poisoning, where attackers corrupt the data used to train models, creating backdoors or biases that can be exploited later. It does not consider the accountability questions that arise when an AI system, compromised by an attack, makes autonomous decisions with security consequences. 
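To make the adversarial-attack risk concrete, here is a minimal sketch of the idea, using a toy fraud scorer built purely for illustration. The model, its weights, and the perturbation budget are invented; real attacks target far more complex systems, and training data poisoning works differently, corrupting the data a model learns from rather than the inputs it sees at run time.

```python
# Toy illustration of an adversarial (evasion) attack in the spirit of FGSM.
# The "fraud scorer" is a fixed linear model invented for this sketch, not any
# real system; it just makes the mechanics visible.
import torch

weights = torch.tensor([[2.0, 2.0]])   # hypothetical feature weights
bias = torch.tensor([-1.0])

def fraud_score(x: torch.Tensor) -> torch.Tensor:
    # Probability that a transaction is fraudulent, according to this toy model.
    return torch.sigmoid(x @ weights.T + bias)

# A transaction the model confidently flags as fraud (score ~0.95).
x = torch.tensor([[1.0, 1.0]], requires_grad=True)
score = fraud_score(x).squeeze()
print(f"original fraud score:  {score.item():.2f}")

# FGSM-style step: nudge each feature by a fixed amount in the direction that
# most reduces the fraud score, taken from the sign of the gradient.
score.backward()
epsilon = 0.6                          # perturbation budget per feature
x_adv = x - epsilon * x.grad.sign()
print(f"perturbed fraud score: {fraud_score(x_adv).item():.2f}")  # drops to ~0.65
```

The unsettling point is how little machinery the evasion needs: a gradient, a sign, and a small budget per feature. Nothing in ISO 27001 asks an organisation to test for this.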

As AI becomes embedded in critical infrastructure, this legal vacuum becomes a national security concern. Consider the following scenarios, all of which are plausible today: 

A bank’s fraud detection AI is compromised through an adversarial attack, causing it to miss obvious fraud patterns while flagging legitimate transactions as suspicious. Who is responsible for the losses? The bank, for deploying a vulnerable system? The vendor, for not anticipating the attack? The regulator, for not requiring security testing? Current law provides no clear answers. 

A hospital’s diagnostic AI is trained on data that has been subtly manipulated to produce incorrect results for certain patient populations. The errors are not discovered until patients have been misdiagnosed and mistreated. Who bears liability? Can the victims sue? Against whom? On what legal theory? 

An AI system managing power grid distribution makes a decision that causes a cascade failure and widespread blackout. The system acted autonomously based on its training and real-time data. No human made the decision. No human could have intervened in time. Under current Indian law, there may be no party clearly responsible for the resulting damage. 

These are not hypothetical concerns raised by techno-pessimists. They are the logical consequences of deploying autonomous, learning systems in critical domains without a legal framework that addresses their unique characteristics. 

 

The Self-Regulation Mirage 

Confronted with these challenges, India has leaned heavily on voluntary guidelines and self-regulatory frameworks. The underlying logic is understandable: AI is evolving rapidly, and premature regulation could stifle innovation. Better to let the industry develop its own standards and practices, intervening only when necessary. 

This approach has a certain surface appeal. But it fails to account for a basic reality of economic behaviour: compliance costs money, and organisations that are not required to incur those costs will, in many cases, choose not to. Cybersecurity measures are expensive. Transparency mechanisms require investment. Algorithmic auditing demands expertise and resources. When these are voluntary, the organisations most likely to adopt them are those already inclined toward responsible practices. The organisations most likely to cause harm, whether through negligence, cost-cutting, or outright malfeasance, face no meaningful pressure to change their behaviour. 

India’s business culture makes this problem worse. We have a long tradition of jugaad, creative improvisation and finding workarounds. This quality serves us well in many contexts, but in the context of AI governance, it means that voluntary frameworks are even less effective than they might be elsewhere. Without the binding force of law, without clear penalties for non-compliance, without effective enforcement mechanisms, compliance remains aspirational. The organisations that should be most constrained by regulation face the fewest constraints. 

The new rules announced in February 2026, requiring labelling of AI-generated content, represent a genuine step forward. They address the specific problem of deepfakes and synthetic misinformation, and they create a real incentive for compliance by tying it to statutory immunity from liability. But they fall far short of a comprehensive governance framework. They focus on labelling and platform due diligence rather than on the broader questions of accountability, security, bias, and liability. They remain grounded in an intermediary framework designed for user-generated content platforms, not for companies whose core product is an autonomous AI system. 
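For a sense of what labelling can mean in engineering terms, consider the sketch below. It is purely illustrative: the field names and structure are invented for this example and are not the schema the February 2026 rules prescribe. What it shows is how little machinery a machine-readable provenance record actually requires.

```python
# Purely illustrative sketch of a machine-readable label for AI-generated
# content. The field names and structure are invented for this example; they
# are not the schema prescribed by the February 2026 rules.
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(content: str, generator: str) -> dict:
    """Wrap a piece of generated text in a hypothetical provenance record."""
    return {
        "content": content,
        "provenance": {
            "synthetically_generated": True,
            "generator": generator,  # hypothetical field naming the model or tool
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated_content("Sample output from a generative model.", "example-model-v1")
print(json.dumps(record, indent=2))
```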

 

The International Context: Falling Behind 

While India deliberates, the rest of the world is acting. The European Union’s AI Act establishes a comprehensive, risk-based framework with significant penalties for non-compliance. China has regulations on generative AI and algorithmic recommendations. South Korea has its AI Basic Act. Japan has developed its approach to AI governance. Hungary and El Salvador have enacted their own legislation. 

Each of these frameworks is imperfect. Each reflects the priorities and limitations of its own legal culture. But each provides something that India currently lacks: clarity. Businesses operating in these jurisdictions know what is required of them. Courts have statutory frameworks to apply when disputes arise. Citizens have clearer pathways to justice when they are harmed by AI systems. 

The contrast with India could not be starker. Here, the foundational questions of AI governance remain unresolved. Does an AI system have any legal personality? Can it bear rights or obligations? Can it be held liable? If an AI system causes harm autonomously, and neither developer, deployer, nor user can be clearly identified as responsible, does the victim have any recourse? Under current Indian law, the answer is almost certainly no. 

The black box problem, the inability of affected parties to understand or challenge how an AI system reached a decision, has not been addressed through any transparency or explainability requirement. Affected individuals have no statutory right to an explanation of AI decisions that affect them. They have no clear mechanism to challenge those decisions. They have no way of knowing whether bias, error, or malfunction contributed to their harm. 
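What a right to explanation could look like in practice is not mysterious. The sketch below is a deliberate over-simplification: the credit model, the feature names, and the applicant are all synthetic stand-ins, and real systems would need far more sophisticated attribution methods. But even this minimal level of per-decision accounting is something no Indian statute currently requires.

```python
# A sketch of a per-decision explanation for a simple credit model. Everything
# here (model, feature names, data) is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["monthly_income", "existing_debt", "years_in_business"]

# Synthetic training data: approvals broadly track income and tenure, not debt.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# One applicant, with standardised feature values: low income, high debt, some tenure.
applicant = np.array([[-0.2, 1.8, 1.0]])
decision = model.predict(applicant)[0]
print("decision:", "approve" if decision == 1 else "reject")

# For a linear model, coefficient x feature value is an honest account of what
# pushed this particular applicant's score up or down.
contributions = model.coef_[0] * applicant[0]
for name, value in sorted(zip(features, contributions), key=lambda item: item[1]):
    print(f"{name:>20}: {value:+.2f}")
```

The missing piece is not the technique; it is the obligation to provide anything of the kind.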

Copyright law does not address AI-generated works. If an AI system creates a painting, who owns the copyright? The developer who wrote the code? The user who provided the prompt? The AI system itself? The public domain? Current law provides no answer, creating uncertainty for artists, creators, and businesses alike. 

The legal status of AI agents, systems capable of taking autonomous actions in the digital world, including entering contracts, conducting transactions, and interacting with other systems, is entirely undefined. If an AI agent enters into a contract, is that contract enforceable? Who is bound by it? Who can be sued for breach? These questions are not academic. They arise every day in automated trading systems, supply chain management, and digital commerce. 

And the question of data privacy in AI training remains deeply contested. The Digital Personal Data Protection Act of 2023 provides some protections, but AI companies often treat user data as effectively public for training purposes. The tension between privacy rights and the voracious data appetite of AI systems has not been resolved. 

 

The Human Cost of Legal Uncertainty 

It is easy to discuss these issues in abstract terms, to focus on statutes, frameworks, and regulatory approaches. But beneath all of this are real human beings whose lives are affected by the gap between technology and law. 

Consider the woman whose job application was rejected by an AI screening system that, it later emerged, had been trained on historical hiring data reflecting years of gender discrimination. The system did not explicitly consider gender, but it had learned that candidates from certain educational institutions, institutions that had historically admitted fewer women, were more likely to succeed. The result was a system that perpetuated and automated discrimination. Under current Indian law, what recourse does she have? She cannot sue the AI system. She cannot easily prove that the developer intended to discriminate. The deployer may have acted in good faith. The legal framework offers no clear path to accountability. 

Consider the small business owner whose loan application was denied by an AI credit scoring system. The system gave no explanation for its decision. The bank could not explain it either, because the system’s internal reasoning was opaque even to its developers. The business owner, who had successfully operated for years, was left without credit and without recourse. The system’s decision may have been wrong, biased, or arbitrary. But without transparency requirements, without a right to explanation, without clear liability standards, there is no way to know and no way to challenge. 

Consider the patient whose medical AI misdiagnosed a condition, leading to delayed treatment and worsened outcomes. The AI system was approved by regulators who lacked the expertise to evaluate it thoroughly. The hospital deployed it in good faith. The developers designed it to the best of their ability. But somewhere in the complex chain of training data, algorithmic design, and clinical implementation, something went wrong. Under current law, who bears responsibility? Who compensates the patient? Who ensures that similar failures do not happen again? 

These are not hypothetical scenarios. They are happening now, in India, every day. And our legal system is not equipped to address them. 

 

The Path Forward: What India Needs 

The solution to this crisis is not complicated to describe, though it will be difficult to implement. India needs a dedicated AI law. It needs a national cybersecurity framework updated for the AI era. It needs a dedicated authority that consolidates governance across ministries and provides coherent regulatory oversight. And it needs to move from watching how others regulate to actively contributing to the development of AI governance norms that reflect the interests and values of the global south. 

A dedicated AI law would address the foundational questions that current law leaves unresolved. It would establish clear rules for liability when AI systems cause harm. It would define the legal status of AI systems and AI-generated content. It would set transparency and explainability requirements, giving affected individuals the right to understand and challenge decisions that affect them. It would address bias and discrimination, ensuring that AI systems do not automate and amplify historical injustices. It would establish security standards appropriate to AI systems, addressing the specific vulnerabilities that machine learning introduces. 

A national cybersecurity framework updated for the AI era would move beyond the ISO 27001 standard to address AI-specific risks. It would require security testing for AI systems deployed in critical domains. It would establish incident reporting requirements, ensuring that failures are documented and learned from. It would create accountability mechanisms when security failures cause harm. 

A dedicated AI authority would consolidate governance across ministries, providing the expertise and focus that a technology of this consequence demands. It would develop and enforce standards, conduct audits, investigate complaints, and impose penalties for non-compliance. It would serve as a central point of contact for businesses, citizens, and international partners. 

None of this means abandoning India’s commitment to innovation. On the contrary, legal clarity is one of innovation’s essential preconditions. Businesses invest more confidently when liability rules are clear. Developers build more responsibly when standards are defined. Citizens participate more willingly in an AI-powered society when they know their rights are protected. 

The countries that succeed in the AI era will not necessarily be those with the most advanced technology or the most abundant talent. They will be those that build the institutional infrastructure to govern AI effectively, to capture its benefits while managing its risks, to protect their citizens while enabling innovation. India has the talent, the ambition, and the momentum to be among those countries. But talent and ambition, without legal infrastructure, will not be enough. 

 

The Opportunity Before Us 

The AI Action Summit at Bharat Mandapam was a moment of pride for India. It demonstrated our capacity to convene, to lead, to shape global conversations about the future of technology. But summits are ephemeral. What matters is what happens after the delegates leave, after the cameras stop rolling, after the attention shifts elsewhere. 

India faces a choice. We can continue with the current approach, stretching 26-year-old laws to govern technologies their drafters could not have imagined, relying on voluntary frameworks that leave the most serious risks unaddressed, watching as other countries build the legal infrastructure for the AI age while we deliberate. Or we can act. We can build the legal frameworks that our AI ambition deserves. We can give our citizens, our businesses, and our courts the tools they need to navigate the AI era. We can contribute to the development of global AI governance norms, ensuring that the perspectives and values of the global south are represented. 

The gap between what AI can do, what it is doing, and what our legal system can address is real and growing. It carries consequences for citizens who suffer harm without recourse, for businesses that invest without clarity, for courts that adjudicate without guidance, and for India’s credibility as a responsible AI power on the world stage. Closing that gap will not be easy. It will require political will, technical expertise, and sustained effort. But the alternative, leaving the gap open, is not acceptable. 

India’s AI future is bright. But brightness, without legal infrastructure, can be blinding. The time to build that infrastructure is now.