Beyond the Delhi Declaration: The Unspoken Battle for AI’s Future
Eighty-six countries endorsed the non-binding New Delhi Declaration on artificial intelligence, establishing shared principles for inclusive, human-centric AI development focused on expanding access for developing nations and ensuring transparency and accountability. Beneath this diplomatic consensus, however, the summit revealed deep and unresolved tensions. The United States explicitly rejected global AI governance in favor of national sovereignty and strategic trade partnerships, while smaller and developing nations emphasized that meaningful AI access requires not just declarations but actual infrastructure, financing, and technology transfer to address the unprecedented concentration of computational power and resources in the hands of a few corporations and economies.

When 86 Countries Agree—But Can’t Agree on What Matters Most
The photographs from the India AI Impact Summit told a story of unity. World leaders crowded the stage in New Delhi, smiles frozen for the cameras, hands clasped in the universal gesture of diplomatic triumph. Behind them, banners proclaimed the “New Delhi Declaration” — a document endorsed by 86 countries and two international organizations, all pledging to build artificial intelligence that serves humanity rather than threatens it.
But walk ten steps away from that stage, into the corridors where delegates spoke without microphones, and a different picture emerged.
“This is a declaration of intentions, not a treaty of obligations,” one European negotiator confided over coffee on the summit’s final day. “We’ve all agreed that AI should be responsible and inclusive. The question nobody can answer yet is: responsible to whom, and inclusive of what?”
The India AI Impact Summit, which ran from February 16 to 21, 2026, achieved something genuinely remarkable: it brought together AI superpowers and tiny island nations, technology CEOs and civil society advocates, all in one sprawling conversation about where artificial intelligence is taking humanity. The resulting Delhi Declaration commits signatories to expanding AI benefits for developing economies, strengthening public-interest applications in healthcare and education, and promoting transparency in algorithmic systems.
Yet beneath the carefully crafted language lies a terrain of deep disagreement — about who controls the infrastructure AI runs on, whether global governance should exist at all, and whether sovereignty in the 21st century means something fundamentally different from what it did before.
The Two Indias in the Room
Indian Minister Ashwini Vaishnaw framed the declaration as proof of “broad global support” for a human-centric AI vision. And indeed, India played its diplomatic role masterfully, positioning itself as the bridge between the Global North and South, between the regulators and the innovators.
But India itself embodies the tensions the declaration seeks to resolve.
On one hand, India’s digital public infrastructure — the India Stack of digital identity, payments, and data systems — stands as perhaps the world’s most ambitious experiment in technology-led development. Hundreds of millions of Indians now access financial services, government benefits, and healthcare through systems that simply didn’t exist a decade ago. AI promises to extend this revolution further.
On the other hand, India remains profoundly dependent on AI infrastructure it does not control. The chips, the cloud capacity, the foundational models — these come from elsewhere. When the United States restricts semiconductor exports to certain nations, when cloud providers adjust their pricing or terms of service, when major AI companies decide which languages their models will support, India feels those decisions acutely.
This is the paradox facing not just India but virtually every nation outside a small club of AI developers. You can build applications on someone else’s platform, but you cannot build sovereignty.
The Mauritius Problem: Small Economies and Large Ambitions
Navin Ramgoolam, the Prime Minister of Mauritius, brought this dilemma into sharp focus during one of the summit’s most honest sessions. Small economies, he explained, don’t have the luxury of pretending they can go it alone.
“We do not enjoy the same financing tools as major economies,” Ramgoolam told delegates. “No concessionary loans, no subsidies for research and development at scale. Without external partnerships, we simply do not have the capacity to invest in the R&D that is required.”
His words carried weight not because they were new — everyone in the room knew this — but because they articulated something the declaration’s lofty language tends to obscure: access without capacity is just another form of dependency.
The Vice-President of Seychelles, Sebastien Pillay, expanded on this theme. Small states may lack oil or minerals, he acknowledged, but they possess human capital. They want AI to strengthen government efficiency, diversify economies beyond tourism and fisheries, and protect food security and biosecurity. These aren’t abstract aspirations — they’re survival requirements for nations whose physical and economic vulnerabilities compound daily.
But Pillay’s prescription cut deeper than most diplomats are comfortable with. Realizing these ambitions, he argued, requires “sustained technology transfer and legal readiness, not just diplomatic language.” In other words: stop sending us declarations and start sending us engineers, training programs, and actual computing capacity.
The Delhi Declaration nods toward these needs. Whether it will produce them is a different question entirely.
The American Exception: Why Washington Said No to Global Governance
Perhaps the most revealing moment of the summit came when Michael Kratsios, Director of the White House Office of Science and Technology Policy, took the stage. His message was polite, professional, and unmistakably clear: the United States will not cede control of AI to any global body.
Kratsios cautioned against framing AI as a binary between “haves and have-nots,” arguing this perspective misses the point. The real task, he suggested, is enabling governments to deploy the “best AI technology” strategically — and for the United States, “best” means American.
His resistance to centralized global oversight was categorical. Instead, he emphasized “sovereign AI capability” and explicitly rejected the notion of global AI governance. AI should advance through trade and partnership, he argued, not supranational regulatory structures. American AI, he declared, is “open for business” and would prioritize “trade over aid.”
The implications rippled through the conference center. Here was the world’s dominant AI power essentially saying: we’ll work with you bilaterally, we’ll sell you our technology, we’ll partner on terms we both agree to — but we won’t submit to global rules that might constrain our innovation or competitive advantage.
This isn’t merely a policy preference; it’s a structural reality. The companies building the most advanced AI systems — OpenAI, Google, Microsoft, Anthropic — are American. The semiconductor supply chain, while globally distributed, remains heavily concentrated in U.S.-allied economies. The venture capital funding AI development flows predominantly from American sources. Any global governance framework that didn’t accommodate these realities would be, at best, aspirational.
But accommodation cuts both ways. By rejecting global governance outright, the United States also forecloses the possibility of shaping it. And as other centers of AI development emerge — China obviously, but also Europe’s regulatory heft, and potentially India’s scale — the question of whether the 2030s will see a fragmented AI landscape or some form of managed interoperability remains very much open.
The Serbian Challenge: Is Sovereignty Even Possible Anymore?
Serbian President Aleksandar Vučić sharpened the debate further by asking a question few wanted to confront directly: what does sovereignty mean when technology is this concentrated?
Invoking Einstein’s warning that technology can outpace wisdom, Vučić wondered whether political systems can keep pace with AI’s acceleration. He warned of an “unprecedented concentration of technological power” and posed a blunt question that hung in the air long after he left the stage: will a small number of actors set the rules for everyone else?
“Sovereignty in the 21st century,” he argued, includes the ability to control data, regulate algorithms, and develop domestic expertise. Without that capacity, sovereignty risks becoming merely formal — flags and anthems and UN seats, but no actual power to shape citizens’ lives in an era when algorithms increasingly determine who gets loans, who sees job advertisements, who receives medical diagnoses, and who encounters law enforcement.
This analysis cuts against both the American emphasis on national capability and the multilateral dream of global cooperation. If Vučić is right, then even relatively capable middle powers like Serbia face an uphill battle maintaining meaningful sovereignty. The technological stack runs too deep, the network effects are too powerful, the capital requirements too immense.
Slovakian President Peter Pellegrini reinforced this point by focusing on infrastructure. “Computing power is the new infrastructure,” he declared. “AI must not stay in the hands of few.”
Slovakia has invested in domestic supercomputing and can draw on low-carbon energy to power data centers. But even this positions the country as a node in networks ultimately controlled elsewhere. Democratization, Pellegrini argued, requires real access to skills, tools, and fair conditions for innovation — not just the ability to plug into systems designed by and for others.
What the Tech Giants Actually Said
The presence of AI’s corporate leadership — Sam Altman of OpenAI, Sundar Pichai of Alphabet/Google, Demis Hassabis of Google DeepMind, Brad Smith of Microsoft, Dario Amodei of Anthropic — added another layer of complexity to the proceedings.
These are not neutral actors in the AI story. Their companies control the platforms, the models, and increasingly the computing infrastructure that nations must access to participate in the AI economy. Their presence in New Delhi signaled recognition that the political landscape around AI is shifting, and that engagement with governments — many governments — is no longer optional.
Yet their messages were carefully calibrated. They spoke of partnership, of openness, of shared responsibility. They endorsed the declaration’s principles. They committed to working with governments on safety and transparency.
What they did not do is promise any fundamental change in how the AI economy is structured. The concentration of compute, talent, and capital that so concerns smaller nations is not an accident or a fixable bug — it’s the natural outcome of an industry where returns scale with size. Companies that have spent billions building AI capabilities are not about to redistribute those capabilities on terms that undermine their competitive position.
This doesn’t make them villains. It makes them companies. But it does mean that declarations of principle, however sincerely endorsed, operate within constraints that no document can dissolve.
The Infrastructure Question Nobody Wants to Answer
Throughout the summit, one topic kept surfacing despite diplomats’ best efforts to steer conversations elsewhere: the physical infrastructure of AI.
AI requires energy-intensive data centers. It requires advanced graphics processing units (GPUs) that remain in short supply globally. It requires stable jurisdictional frameworks that investors can rely on. It requires, in short, things that most countries simply do not have and cannot easily acquire.
When Pellegrini called computing power “the new infrastructure,” he was making an observation that extends far beyond Slovakia. The countries that control compute clusters will control AI development. The countries that don’t will be consumers, not creators — regardless of what any declaration says.
This is not a new problem. The same dynamic played out with industrial machinery in the 19th century, with mainframes in the 20th, with cloud computing in the early 21st. But AI’s centrality to everything from military capability to economic competitiveness to social organization makes the stakes higher than ever before.
The Delhi Declaration acknowledges this indirectly, calling for expanded access and technology transfer. But it does not, because it cannot, specify how such access would work in practice. Would leading AI companies be required to make their models available on preferential terms to developing countries? Would GPU manufacturers prioritize shipments to nations without domestic production capacity? Would cloud providers offer subsidized computing time for public-interest applications?
These questions have no easy answers. They also have no place in a non-binding declaration that must accommodate everyone from California to Cameroon.
What the Declaration Actually Does
It would be unfair to judge the Delhi Declaration against standards it never claimed to meet. This is not a treaty. It creates no enforcement mechanisms, no binding obligations, no penalties for non-compliance. It is, explicitly and intentionally, a statement of principles.
As such, it serves several important functions.
First, it establishes a common language for discussing AI governance across vastly different political systems and development levels. When Indian officials talk about “inclusive AI” and European regulators discuss “human-centric AI” and American policymakers emphasize “responsible AI,” they are now operating within a shared semantic framework. This matters more than it might seem — consistent terminology is the precondition for consistent policy.
Second, it elevates certain concerns to global prominence. Algorithmic bias, cybersecurity risks, workforce disruption, societal impact — these are now formally recognized as challenges requiring collective attention. Countries that might have ignored these issues can now be reminded of their stated commitments.
Third, it creates political cover for domestic action. Officials who want to regulate AI, invest in public-interest applications, or require transparency from algorithmic systems can point to international consensus as justification. The declaration strengthens their hands against domestic opposition.
Fourth, it keeps the conversation going. The document calls for future meetings, continued dialogue, and eventual follow-up. In diplomacy, process often matters as much as substance — and the process of 86 countries talking about AI governance is itself a significant achievement.
But these functions, while real, operate within strict limits. The declaration does not confront the concentration of compute. It does not reconcile the fundamental tension between national sovereignty and global cooperation. It does not specify who pays for the access it advocates, or how, or on what timeline.
Balaraman Ravindran at the Centre for Responsible AI at IIT Madras, who chaired one of the summit’s working groups, put it honestly: “While operational details are thin, as is expected from a multilateral declaration, the agreement has consensus from so many countries, including the two AI superpowers.”
That consensus is real. Its limits are equally real.
The Global South’s Quiet Demands
Throughout the summit, representatives from developing countries made arguments that rarely appeared in official summaries but shaped the private conversations substantially.
They argued that AI governance discussions cannot be divorced from historical context. The same countries that benefited from colonialism, that shaped the global economic system to their advantage, that control international financial institutions — these countries now want to set rules for AI that would bind everyone else.
This perspective doesn’t reject cooperation. It insists that cooperation must acknowledge power imbalances rather than pretend they don’t exist.
When the Prime Minister of Mauritius notes that small economies lack financing tools, he’s not just making a technical observation. He’s pointing out that the rules of the global economy were written by and for larger players. When the Vice-President of Seychelles demands technology transfer, not just diplomatic language, he’s insisting that declarations of principle must translate into material reality.
The Delhi Declaration nods toward these demands. Whether it satisfies them depends entirely on implementation — and implementation lies outside the document’s scope.
Where We Go From Here
The India AI Impact Summit ended with handshakes and press releases. Delegates flew home with copies of the declaration in their briefcases and questions in their minds.
What happens next?
The declaration will guide national policies in signatory countries, at least to the extent consistent with domestic priorities. It will inform cross-border research collaborations. It may influence standard-setting processes in technical bodies. It provides a reference point for future negotiations.
But the deeper questions — about who controls infrastructure, whether global governance should exist, what sovereignty means in the AI age — these will be resolved not in declarations but in the actual evolution of technology and power.
The United States will continue developing AI on its own terms, partnering with allies but resisting constraints. China will pursue its path, integrating AI with its distinctive political and economic model. Europe will regulate, shaping global standards through market power. India will seek to translate scale into influence. Smaller nations will navigate among these giants, seeking maximum autonomy within minimum options.
Meanwhile, the technology itself will evolve in ways nobody fully predicts. New capabilities will emerge. New risks will materialize. New actors will enter the field. The ground beneath today’s debates will shift, as it always does.
The Delhi Declaration matters because it captures a moment — a moment when 86 countries agreed that AI should serve humanity, that access should expand, that risks should be managed. Those are not trivial agreements. They represent genuine progress in building shared understanding across profound differences.
But they are also, in the end, just words. The work of turning words into reality lies ahead, in negotiations and investments and institutional designs that will occupy policymakers for years to come.
Einstein, as Vučić reminded the summit, warned that technology can outpace wisdom. The Delhi Declaration is an attempt to catch up — to bring political wisdom to bear on technological acceleration. Whether it succeeds depends less on the document itself than on what its signatories do next.
The cameras have stopped flashing. The leaders have returned home. The real work begins now.