The Silicon Tightrope: Why India’s AI Revolution Is Racing Ahead of Its Legal Safety Net
India is facing a critical legal crisis as its rapidly advancing AI ecosystem, showcased at the 2026 AI Impact Summit, operates under the outdated Information Technology Act of 2000, a law designed for the early internet era that contains no provisions for artificial intelligence, machine learning, or algorithmic accountability. This legal vacuum creates dangerous uncertainty around liability when AI systems cause harm, leaves critical national infrastructure exposed in the absence of AI-specific cybersecurity standards, and renders voluntary self-regulation ineffective, especially within India’s culture of creative regulatory workarounds. With major jurisdictions from the European Union to China already enacting comprehensive AI legislation, India’s failure to move from observation to action not only jeopardizes citizen rights and business certainty but also threatens its credibility as a global AI leader, making the passage of a dedicated, modern AI law an urgent national priority.

As the world’s eyes turn to New Delhi for the AI Impact Summit 2026, the atmosphere at Bharat Mandapam is one of unbridled optimism. Delegates marvel at India’s transformation into a formidable AI powerhouse. The IndiaAI Mission is pumping billions into infrastructure, a vibrant ecosystem of deep-tech startups is solving problems from agrarian distress to predictive healthcare, and the Prime Minister is positioning the nation as the bridge builder for the Global South in the age of intelligent machines.
Yet beneath the polished surface of the summit, a parallel, more troubling conversation is taking place in the corridors of India’s High Courts, in the boardrooms of edgy legal-tech founders, and around the tables of anxious Chief Information Security Officers (CISOs). The question haunting them is simple but profound: What law actually governs us?
The uncomfortable answer, as highlighted by Supreme Court advocate and global AI law expert Dr. Pavan Duggal, is that India is attempting to pilot a hypersonic jet using a regulatory framework designed for a bullock cart. The nation is in the throes of a legal crisis, governing one of the most consequential technologies in human history with a 26-year-old piece of legislation—the Information Technology (IT) Act of 2000.
The Ghost in the Machine: The 26-Year-Old Law
To understand the depth of the problem, one must first understand the origins of India’s digital rulebook. The IT Act was born in an era of dial-up internet, nascent e-commerce, and the Y2K bug. Its primary purpose was to facilitate electronic commerce by providing legal recognition to digital signatures and transactions. It was a forward-looking law for its time, but its vision was limited to a world where humans were the sole creators and operators of digital systems.
Today, that world no longer exists. We now live in a world where algorithms approve or deny loans, where large language models generate poetry and code, and where autonomous AI agents can potentially negotiate contracts with other AI agents without a single human keystroke.
The IT Act is silent on all of it.
It contains no definition for an algorithm, no provision for the liability of a machine learning model, and no guidance on the legal status of a deepfake created by a generative AI tool. When a citizen is wrongly denied a welfare benefit due to a biased automated decision, or when a company suffers a data breach because of a novel adversarial attack on its AI system, the courts and regulators are forced to perform legal gymnastics. They are compelled to stretch concepts designed for “intermediaries” (like ISPs) to apply to AI companies, and to fit the square peg of autonomous decision-making into the round hole of human-centric criminal law.
This isn’t just an academic inconvenience. It’s a crisis of accountability. If a self-driving car developed in Bengaluru causes an accident in a smart city zone, who goes to jail? The passenger? The developer who wrote the code? The company that trained the model on potentially flawed data? The law provides no clear answer. This legal vacuum creates a chilling effect on innovation, where fear of unpredictable liability can be as stifling as over-regulation. For citizens, it means that if an AI system harms them, their path to justice is a foggy maze with no clear exit.
The Invisible Battlefield: AI and the National Security Blind Spot
Perhaps the most alarming gap identified by Dr. Duggal is the intersection of AI and national security. In 2026, cybersecurity is no longer just about firewalls and antivirus software. It’s about protecting the integrity of the data that trains our models. It’s about defending against “adversarial attacks,” where malicious actors subtly manipulate inputs to fool an AI: placing a carefully crafted sticker on a stop sign, for instance, so that a self-driving car’s vision system misreads it entirely.
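To make the threat concrete, here is a minimal sketch of the underlying idea, using a toy scikit-learn model rather than any real system. The dataset, model, and step size are all illustrative assumptions; the point is only that, for a linear classifier, an attacker who knows the model’s weights can flip a prediction with a tiny, targeted nudge to the input.

```python
# A toy illustration of an adversarial input attack on a linear model.
# All data and parameters here are hypothetical; no real system is involved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick the sample sitting closest to the decision boundary.
scores = model.decision_function(X)
i = int(np.argmin(np.abs(scores)))
x = X[i].copy()
print("original prediction: ", model.predict(x.reshape(1, -1))[0])

# For a linear model, the gradient of the decision score with respect to
# the input is just the coefficient vector; step against the current class.
w = model.coef_[0]
step = -np.sign(w) if scores[i] > 0 else np.sign(w)
x_adv = x + 0.1 * step  # a small, bounded nudge to every feature
print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])
```

Real-world attacks on deep vision models are far more sophisticated, but the failure mode, small input changes producing large output changes, is the same.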
India’s approach to cybersecurity, however, remains anchored in the Information Technology (Reasonable Security Practices and Procedures) Rules, 2011. These rules essentially point to ISO 27001 as the gold standard. While ISO 27001 is a robust framework for traditional information security, it was never designed for the age of AI. It doesn’t account for the poisoning of training datasets, the extraction of private training data through model inversion attacks, or the autonomous propagation of a breach by a compromised AI agent.
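Dataset poisoning, the first of those gaps, is equally easy to demonstrate in miniature. The sketch below, again a toy scikit-learn setup with hypothetical parameters rather than any documented attack, flips a fraction of training labels and shows test accuracy degrading as the poisoning rate rises.

```python
# A toy illustration of training-data poisoning via label flipping.
# Dataset, model, and poisoning rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on a copy of the data with `flip_fraction` of labels flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{int(frac * 100):>2d}% flipped -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```

An ISO 27001 audit checks whether access to data is controlled; nothing in it asks whether the data itself has been quietly corrupted before training.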
As AI becomes the brain of critical infrastructure—managing power grids, controlling defense systems, and processing payments in the banking sector—this legal and technical blind spot becomes a strategic vulnerability. We are building the nerve center of our digital nation with a sophisticated new material, but we are using a standard-issue manual to secure it. The lack of a dedicated, legally enforceable AI security framework is not just a regulatory failing; it is an open invitation to state and non-state actors who are already exploring these new frontiers of cyber warfare.
The Mirage of Self-Regulation in a Culture of Jugaad
In the absence of hard law, the government’s instinct has been to lean on voluntary guidelines and self-regulation. The logic is sound in theory: let the industry innovate freely, and don’t hamper a nascent sector with red tape. The updated 2026 intermediary rules, which mandate the labelling of AI-generated content to combat deepfakes, are a step in the right direction. They signal an intent to act.
However, self-regulation has a fundamental flaw, especially within India’s unique business context. India is a land of “jugaad”—a creative, frugal, and often brilliant form of innovation that involves finding quick, work-around solutions. While this mindset has driven entrepreneurial success, it also fosters a culture of regulatory arbitrage, where rules are seen as hurdles to be cleverly bypassed rather than standards to be upheld.
In such an environment, voluntary compliance becomes a competitive disadvantage for the conscientious. The companies most likely to cut corners on expensive safety measures like algorithmic auditing or bias testing are often the ones creating the highest-risk systems. Without the binding force of law and the deterrent of significant penalties, self-regulation becomes a mirage. It allows us to believe we are building a responsible AI ecosystem, while in reality we are merely hoping that no one drives off the cliff.
The Global Clock Is Ticking on India’s Leadership
India’s legal inertia is becoming increasingly difficult to ignore on the world stage. The European Union has enacted its comprehensive AI Act, creating a risk-based framework with extraterritorial reach. China has its own set of stringent regulations for algorithms and generative AI. From South Korea to Japan to El Salvador, jurisdictions are moving to create foundational AI laws. These frameworks are not perfect, but they exist. They provide certainty for businesses and a clear signal of intent for global investors.
India, by contrast, is falling into a “regulatory observation” trap. The wait-and-watch approach, intended to learn from others’ mistakes, risks ceding India’s ability to shape the global norms of AI governance. The narrative of AI is currently being written by the West and China. If India wants to champion the interests of the Global South, ensuring that AI serves the cause of development, equity, and linguistic diversity, it must act now. Hosting the summit is a powerful statement of intent, but without a domestic legal architecture to back it up, that intent risks being perceived as mere aspiration. It’s like building a gleaming world-class terminal without laying the runways.
The Unanswered Questions of Our Time
As we navigate this complex landscape, several foundational questions remain unanswered, creating a dangerous undercurrent of uncertainty:
- Legal Personhood: Can an autonomous AI agent be a party to a contract at all? If an agent enters into a contract that goes bad, is the contract void, and who is liable for the breach?
- Intellectual Property: If an AI creates a bestselling novel or a hit song, who owns the copyright? The user who prompted it? The developer who built it? Or does it fall into the public domain? And what about the millions of copyrighted works used to train these models—is that “fair use” or large-scale infringement?
- The Black Box Problem: When a citizen is denied a loan or flagged by a law enforcement algorithm, do they have the right to an explanation? Current law does not guarantee transparency or explainability in automated decision-making, a fundamental pillar of natural justice; a minimal sketch of what such an explanation could look like follows this list.
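For simple models, explanations are technically cheap, which underlines that the barrier to the right to an explanation is legal, not technical. The sketch below uses an entirely hypothetical credit model with made-up feature names to show how a linear model’s decision decomposes into per-feature contributions that could be handed to a denied applicant.

```python
# A toy illustration of an "explanation" for an automated credit decision.
# The model, data, and feature names are entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "existing_debt", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_weights = np.array([1.5, -2.0, 0.8])
y = (X @ true_weights + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, each feature's contribution to the decision score
# is simply its coefficient times its (standardised) value.
applicant = np.array([-0.8, 1.4, -0.5])  # a hypothetical denied applicant
contributions = model.coef_[0] * applicant
print("decision:", "approve" if model.predict(applicant.reshape(1, -1))[0] else "deny")
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>15s}: {c:+.2f} ({'toward approval' if c > 0 else 'toward denial'})")
```

Deep models need more elaborate attribution techniques, but the principle stands: explainability is an engineering requirement that law can mandate, not an impossibility.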
The Path Forward: From Observation to Action
Dr. Duggal’s critique is not an argument against India’s AI ambition. On the contrary, it is a plea to safeguard it. A robust legal framework is not the enemy of innovation; it is its essential prerequisite.
Businesses crave predictability. They will invest billions when they know the rules of the game. Developers can build more responsibly when they have a clear standard to meet. And citizens will embrace an AI-powered future with far greater confidence when they know their rights are protected by a modern, dedicated law.
The path forward is clear but urgent. India needs a dedicated, horizontal AI law that addresses liability, security, bias, and transparency in a holistic manner. It needs a national cybersecurity framework built for the AI era, moving beyond an ISO 27001 benchmark that was never designed for AI-specific threats. It needs an empowered, dedicated regulatory authority that can provide coherent oversight, replacing the current siloed approach spread across multiple ministries.
As the AI Impact Summit 2026 concludes, the delegates will leave with memories of India’s vibrant tech scene and its ambitious vision. But for those paying close attention, the real story will not be the one told on the main stage. It will be the urgent, underlying question of whether India can build the legal scaffolding fast enough to support the gleaming AI edifice it is constructing. The technology of tomorrow is already here. The time to govern it with laws of tomorrow, not yesterday, is now.