The Delhi Disconnect: When World Leaders Met AI’s Future and Blinked
The Delhi AI summit exposed a troubling disconnect. On one side stood the urgent warnings of leading AI developers, who predict superintelligence within two years and a transformation ten times faster than the Industrial Revolution. On the other stood world leaders, who signed a non-binding declaration focused on “democratising” AI without any concrete mechanisms while simultaneously pursuing ordinary economic competition for investment and market share. Despite gathering the world’s best technical minds, the summit produced no meaningful policy progress, raising serious questions about whether governments can respond to an unprecedented technological transformation before it arrives.

The ballroom fell silent as Sam Altman delivered what may become the most prophetic speech of 2026. Standing before a gathering of global leaders in New Delhi, the OpenAI CEO didn’t mince words: superintelligence could arrive within two years, and when it does, it will outperform executives like him at their own jobs.
“You should be prepared,” he told the room, “for a world that looks fundamentally different.”
Yet when the week-long AI summit concluded, the official declaration read like it could have been written about any technology—cloud computing, 5G networks, or even railroad expansion in the nineteenth century. Nearly ninety countries signed on to vague commitments about “democratising AI” and ensuring “affordable access,” as if they were discussing broadband internet rather than what may be humanity’s final invention.
The gap between what the frontier labs are building and what governments are prepared to handle has never been wider—and Delhi made this painfully clear.
The Frontier Speaks: What the Lab Leaders Actually Said
While diplomats traded talking points about inclusive growth, the people actually building artificial general intelligence painted a far more urgent picture.
Demis Hassabis, CEO of Google DeepMind, offered a staggering comparison: AI would deliver “ten times the impact of the industrial revolution at ten times the speed.” The industrial revolution unfolded over generations, allowing societies to gradually adapt—labour movements formed, education systems evolved, safety nets emerged. His warning suggested that whatever transformations took two centuries to play out might now compress into twenty years.
Anthropic’s Dario Amodei pushed further, describing what his team is approaching: “a country of geniuses in a data centre.” He envisions AI systems more capable than most humans at most tasks, coordinating at speeds no biological brain can match. The metaphor matters—not a tool, not an assistant, but an entire population of superhuman intellects operating in parallel.
Altman’s most striking prediction concerned his own obsolescence. When superintelligence emerges, he suggested, it would outperform someone like him in the chief executive role. This wasn’t false modesty from a tech founder. It was an acknowledgment that the gap between human and machine capability may soon render traditional notions of leadership obsolete.
These weren’t abstract philosophical musings. They were product roadmaps.
The Declaration That Said Nothing
Against this backdrop, the Delhi declaration read as a masterclass in diplomatic avoidance.
“We recognise the need to democratise AI,” the document stated, before offering precisely zero mechanisms to achieve this goal. It referenced a “Charter for the Democratic Diffusion of AI” that, conveniently, does not yet exist—a promise of future promises that commits no one to anything.
Indian Prime Minister Narendra Modi’s rhetoric soared higher than the declaration itself. He spoke of AI as a “global common good” and called for technology that serves humanity rather than the other way around. Yet even as he spoke, his government was aggressively pitching investors on “Make in India” initiatives—because India, quite sensibly, wants its piece of the economic action.
This contradiction ran through every conversation. Countries want AI to be democratically shared and equitably distributed, just as long as they get to build their own competitive advantages first.
The Sovereignty Puzzle: America’s Offer and Its Complications
The most coherent vision came from Michael Kratsios, the White House’s science and technology director. He proposed that US partners build their AI futures on American technology—with American companies doing the building and international loans facilitating the adoption. A “Tech Corps” modelled on the Peace Corps would help integrate these systems globally.
On paper, this offers a path to AI sovereignty without requiring every nation to replicate Silicon Valley from scratch. But the offer lands in a complicated geopolitical moment. President Donald Trump’s trade wars and aggressive foreign policy have left many traditional US allies bruised and wary. Accepting American technological dominance feels different when the dominant power has spent years demanding economic concessions and questioning alliance commitments.
The democratisation framing collapses when examined closely. What Kratsios described is essentially standard commerce—countries buying American products because they’re the best available. That’s not democratisation; it’s market leadership. Nothing wrong with it, but let’s call it what it is.
The Investment Reality That Undercuts the Idealism
Meanwhile, the Indian government celebrated over $250 billion in pledged infrastructure investments. The CEOs who gathered for roundtables knew which messages would resonate. When Cisco’s Jeetu Patel momentarily forgot to mention the company’s “Make in India” commitment during his remarks, he was gently but firmly reminded to correct the record.
This is how normal economic competition works. Countries compete for investment, companies seek favourable terms, and everyone tries to capture value. But the dissonance with the summit’s loftier language was almost audible.
You cannot simultaneously treat AI as an unprecedented transformation requiring new social contracts and as a standard industrial sector where you’re trying to capture market share. Or rather, you can—and governments are—but the inconsistency reveals that the unprecedented part remains rhetorical while the competitive part drives actual decisions.
Why Governments Can’t Seem to Catch Up
The problem isn’t limited to India, nor is it primarily India’s fault. The Delhi summit reflected a global pattern: leaders acknowledge AI’s transformative potential in speeches while their policy apparatuses respond as if this were just another technology cycle.
Part of the difficulty is structural. Governments are designed for incremental change—budget cycles, election terms, regulatory processes that move at the speed of stakeholder consultation. AI development moves at the speed of compute scaling and algorithmic breakthroughs.
When Altman predicts superintelligence within two years, he’s describing a timeline shorter than most countries’ legislative processes. A bill introduced today might become law after the transformation it was meant to address has already occurred.
The mismatch isn’t just temporal. It’s cognitive. Policymakers are generalists who must juggle dozens of competing priorities. The AI researchers and CEOs they hear from are specialists who eat, sleep, and breathe this technology. The information asymmetry is enormous, and it shows.
The Unasked Questions That Linger
What if Altman is right about that two-year timeline? The question hung unasked over every session.
What if Amodei’s “country of geniuses” materialises before the next election cycle? What if Hassabis’s ten-times-faster industrial revolution displaces entire employment categories before retraining programs can be designed, let alone implemented?
The summit’s focus on equitable global distribution of AI capabilities implicitly assumes we have time to work out distribution mechanisms. But if the frontier labs’ predictions are accurate, the window for thoughtful policy design may close faster than anyone expects.
Some governments are beginning to grasp this. The European Union’s AI Act represents an attempt to get ahead of the curve, though critics argue its compliance framework will struggle to keep pace with technical change. The Biden administration’s executive orders on AI safety, now potentially facing revision under Trump, at least acknowledged the need for federal coordination.
But these remain isolated efforts, not the coordinated global response that a genuinely transformative technology demands.
What Serious Policy Would Look Like
If governments were truly treating AI as unprecedented, what would change?
First, they would acknowledge that existing economic frameworks may not suffice. Altman himself raised the possibility of a new social contract. What might that include? Portable benefits not tied to employment, since stable lifelong careers with single employers may become rarer. Wealth distribution mechanisms that capture value from highly automated industries. Education systems designed not just to train workers but to cultivate human capabilities that complement rather than compete with machines.
Second, they would accelerate their own technical capacity. Most governments lack the in-house expertise to evaluate AI claims critically. They rely on industry briefings and academic consultants rather than maintaining their own technical talent. If AI is strategically critical, governments need people who can read the code, understand the architectures, and ask informed questions.
Third, they would experiment with governance models that match AI’s development speed. Regulatory sandboxes, fast-track review processes for emerging risks, continuous adaptation rather than fixed rules—all these approaches deserve serious exploration. The current model of legislate-first, adapt-later cannot work when “later” arrives before legislation passes.
Fourth, they would engage the public honestly about what’s coming. Most people still think of AI as better search engines or helpful chatbots. They don’t know that leading labs are racing toward systems that could outperform humans across most cognitive work. That communication gap creates political space for industry to shape the narrative, but it also means populations are unprepared for changes that may soon affect them directly.
The Conversations That Happened in the Margins
The summit wasn’t devoid of serious discussion. In meeting rooms and lecture halls away from the main stage, technical experts and policy specialists wrestled with these questions. They debated compute governance, discussed international verification mechanisms for AI development, and explored how to maintain human agency in increasingly automated systems.
These conversations were excellent—informed, nuanced, and genuinely forward-looking. But they were happening among people who already understand the stakes. The question is whether their insights reached the political leaders who would need to act on them.
The evidence from Delhi suggests limited penetration. The final declaration reflected none of the urgency or specificity that characterised the expert discussions. It was as if two parallel summits occurred simultaneously—one grappling with the future, the other producing the kind of document that allows leaders to claim they’ve addressed an issue without actually addressing it.
Where We Go From Here
The next global AI summit cannot arrive soon enough, and it must be different. Leaders need to attend not as orators delivering prepared remarks but as students seeking to understand a transformation they did not create and may not control.
They should arrive with questions, not answers. What does Amodei’s “country of geniuses” mean for employment policy? How does Altman’s two-year timeline affect infrastructure planning? What does Hassabis’s ten-times-faster revolution imply for social stability?
They should also arrive prepared to act. Not to sign another non-binding declaration referencing documents that don’t exist, but to begin constructing the institutions and frameworks that a transformed world will require. Some of this work will fail—the history of technology policy is littered with well-intentioned efforts that missed their target. But failing while trying is different from failing while watching.
The Delhi summit succeeded in gathering the world’s best minds on AI. It failed in translating their insights into political response. That failure matters because the window for thoughtful preparation is closing. When superintelligence arrives—whether in two years, five years, or ten—the question won’t be whether governments were warned. They were warned in Delhi, and in countless forums before it.
The question will be whether they finally decided to listen.