Beyond the Billions: What the India AI Summit Revealed About Technology’s Role in Humanity’s Toughest Fight

The India AI Summit exposed a stark contrast between the glittering promises of billions in investment and the gritty reality of development work happening just outside its venue, underscoring the urgent question of whether AI is being built for the world’s poorest or simply for the wealthy. Against the backdrop of a collapsing traditional aid sector, the summit showcased genuine potential—like AI-powered monsoon forecasts reaching millions of farmers—but also revealed a dangerous lack of rigorous evaluation frameworks to prove whether these tools actually improve outcomes per dollar spent. The true test, set for the 2028 summit, will be whether the technology moves beyond slogans and announcements to deliver measurable benefits for the workers painting railings outside the convention center, not just the executives inside it.

The image is almost too perfectly symbolic to be accidental. Inside Delhi’s gleaming Bharat Mandapam convention center, the global tech elite gathered to announce investment pledges exceeding $200 billion. Outside, young men balanced on ladders in heavy morning traffic, hastily repainting railings along Mathura Road to impress visiting Silicon Valley executives. The summit’s central question wasn’t hiding in any keynote speech—it was written in that jarring contrast between the polished future being sold indoors and the messy, persistent present happening just beyond the glass walls.

Who, exactly, is artificial intelligence being built for? And in an era when traditional development aid is collapsing, can algorithms really help the people those railings were being painted to obscure?

The Summit’s Uncomfortable Juxtaposition

Let’s sit with that image for a moment, because it tells us something important about how technology discussions unfold in the Global South. The men repainting those railings weren’t part of any official delegation. They weren’t attending panels on digital public infrastructure or inclusive AI frameworks. They were simply working—doing the invisible labor that keeps cities functioning while the powerful arrive to discuss the future.

This isn’t a criticism of summit organizers. Every major international event involves last-minute tidying. But the timing and location created an accidental honesty that scripted sessions couldn’t match. Here was AI’s promise of transformation occurring literally feet away from the kind of manual work that technology is supposed to either replace or elevate. The separation felt less like coincidence and more like prophecy.

Prime Minister Modi stood inside and pitched India as humanity’s AI hub. UN Secretary-General António Guterres called for a $3 billion fund to help developing nations build AI capacity. India’s remarkable digital public infrastructure—which has genuinely brought 1.4 billion people into a functioning digital economy—was showcased as proof that state-backed technological deployment can work at unprecedented scale. The numbers were dizzying, the ambition intoxicating.

But outside, the morning commute continued unchanged, and the workers painting railings probably weren’t thinking about large language models or frontier AI commitments. They were thinking about finishing before the traffic got worse.

The Crumbling Foundation That Changes Everything

Here’s what made the Delhi summit different from previous tech-for-good gatherings: the timing coincided with a seismic shift in how global development gets financed. Official development assistance from wealthy nations fell 6 percent in real terms during 2024. The OECD projects the 2025 decline will be far steeper—somewhere between 9 and 17 percent. The United States has slashed its foreign aid budget by half and effectively dismantled USAID as it has functioned for decades.

The aid sector isn’t just tightening its belt. It’s hemorrhaging, with no credible timeline for recovery.

This matters because it fundamentally changes the question we should be asking about AI and development. For years, the conversation revolved around augmentation: how can artificial intelligence make aid more effective, targeting more precise, outcomes more measurable? That question assumed a baseline of robust humanitarian funding that technology could amplify.

That assumption no longer holds. The baseline is disappearing.

So we’re left with a harder question: as the traditional architecture of international assistance crumbles, can AI help fill even a fraction of what’s being lost? And more uncomfortably, are we tempted to pretend it can—to use impressive demonstrations as political cover for retreating donors?

The answer to both questions is complicated, and the Delhi summit offered glimpses of possibility alongside warnings of self-deception.

What AI Actually Does When It Works

Let me give you a concrete example that got less attention than the billion-dollar pledges but might ultimately matter more.

The AIM for Scale programme—a UAE initiative announced at COP28—sent AI-enabled monsoon forecasts via SMS to 38 million farmers across 13 Indian states last year. Not fancy apps requiring smartphones and data plans. Just text messages, delivered through infrastructure that already exists, telling smallholder farmers when to plant and when to harvest based on climate models localized down to the village level.

This year, the programme expands to 11 countries across Africa, Asia and Latin America.

Think about what this actually means. A farmer in rural Odisha who has never used ChatGPT, who may have limited literacy, who certainly isn’t reading white papers on artificial intelligence—that farmer just received actionable information that could mean the difference between a harvest that feeds their family and one that fails. The AI worked without announcing itself. It delivered value without demanding understanding.
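The core of such a pipeline is almost embarrassingly simple to sketch. The function, fields and thresholds below are illustrative assumptions, not details of the actual AIM for Scale system; the point is that the deliverable is a plain single-segment text message any handset can receive, not an app.

```python
# Hypothetical sketch: rendering a village-level forecast as a plain SMS.
# Names and rainfall thresholds are invented for illustration; they are
# not taken from the AIM for Scale programme.

SMS_LIMIT = 160  # standard single-segment SMS length

def forecast_to_sms(village: str, rain_mm: float, window_days: int) -> str:
    """Turn a village-level rain forecast into a short advisory text."""
    if rain_mm >= 50:
        advice = "Heavy rain expected. Delay sowing."
    elif rain_mm >= 10:
        advice = "Moderate rain expected. Good window to sow."
    else:
        advice = "Little rain expected. Irrigate if possible."
    msg = f"{village}: {rain_mm:.0f}mm rain likely in next {window_days} days. {advice}"
    # Truncate to one SMS segment so delivery works on any handset.
    return msg[:SMS_LIMIT]

print(forecast_to_sms("Puri", 62.0, 5))
```

Everything hard lives upstream, in the climate model producing `rain_mm`; the delivery layer deliberately stays this dumb so it reaches the oldest phone on the network.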

This is the version of AI for development that matters. Not the flashy demos, not the grandiose promises, but the quiet deployment of predictive tools that anticipate disease outbreaks before health systems become overwhelmed, that route resources more efficiently, that give vulnerable populations information they couldn’t otherwise access.

The technology is already doing things that were impossible five years ago. Climate models have reached sufficient resolution to offer village-level guidance. Natural language processing has advanced enough to handle the linguistic diversity of places like India, with its dozens of major languages and hundreds of dialects. Computing costs have fallen enough that deploying these tools at scale is financially plausible even for cash-strapped governments and NGOs.

The question isn’t whether AI can help. It’s whether we’ll deploy it thoughtfully enough, and whether we’ll resist the temptation to overclaim.

The Aid Sector’s Reckoning

Here’s an uncomfortable truth that development professionals don’t like to discuss publicly: the current aid system is inefficient in ways that technology could meaningfully address.

Today, development money flows through layers of expensive international contractors. This isn’t always because they deliver better outcomes on the ground. Often, it’s because they can shoulder the compliance, reporting and audit requirements designed to prevent fraud and corruption. Local organizations with deeper community ties and lower overheads can’t navigate the paperwork mountain. The result? USAID’s own data shows direct funding to local partners sits at just 12 percent.

AI tools that automate compliance workflows, that flag potential irregularities for human review, that generate required reports from standardized data—these could make accountability cheaper without limiting oversight. They could remove the administrative barriers that currently exclude local actors from accessing development funding.
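The flag-for-human-review pattern is simple to state in code. A minimal sketch, with made-up transaction fields and a fixed threshold — a deployed system would calibrate these against historical audit findings rather than hard-code them:

```python
# Minimal sketch of flag-for-human-review compliance screening.
# The transaction fields and the threshold are hypothetical.

def flag_for_review(transactions: list[dict], limit: float = 10_000.0) -> list[dict]:
    """Return the transactions a human auditor should examine, with reasons."""
    flagged = []
    for tx in transactions:
        reasons = []
        if tx["amount"] > limit:
            reasons.append("amount above threshold")
        if tx.get("receipt") is None:
            reasons.append("missing receipt")
        if reasons:
            flagged.append({**tx, "reasons": reasons})
    return flagged  # everything else clears without manual work

txs = [
    {"id": 1, "amount": 250.0, "receipt": "r-001"},
    {"id": 2, "amount": 18_000.0, "receipt": None},
]
print(flag_for_review(txs))
```

The design choice that matters is in the return value: the machine narrows the pile and states its reasons, and a person still makes every adverse decision.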

This isn’t glamorous work. No one’s announcing $200 billion in pledges to streamline grant reporting. But if we’re serious about sustaining essential services with shrinking budgets, this is exactly where efficiency gains matter most.

The same logic applies across the development back office. Procurement, monitoring and evaluation, beneficiary verification, supply chain tracking—all of these functions consume significant resources that could be redirected toward direct services if AI tools handled the routine work. The technology exists. The question is whether donors will fund its deployment the way they fund clinics and schools, with sustained investment in training, data governance and integration.

The Accountability Gap Nobody’s Talking About

But here’s where the conversation gets harder.

We don’t actually know whether most AI-for-development tools improve outcomes per dollar spent. We have compelling anecdotes, promising pilots, enthusiastic case studies. What we largely lack are rigorous evaluation frameworks that can answer the question with confidence.

Does an AI-powered crop advisory actually increase yields enough to justify its cost? Does predictive disease surveillance catch outbreaks earlier than traditional methods, and does earlier detection translate into lives saved? Does automated compliance review reduce fraud without creating new forms of exclusion?

We’re taking these questions largely on faith. And faith is a dangerous foundation for policy when budgets are shrinking and stakes are rising.

The gap is understandable. Rigorous evaluation is expensive and slow. It doesn’t produce the kind of compelling narratives that attract funding. By the time you’ve completed a randomized controlled trial of an AI intervention, the technology has probably evolved past the version you tested. But without that evidence base, we’re essentially guessing about what works.

This matters more than usual because AI systems can fail in ways that are hard to detect. A language model performing poorly in a low-resource language might generate plausible-sounding but incorrect guidance that farmers follow with disastrous results. A predictive model trained on biased data might systematically under-serve certain populations while appearing to perform well overall. These failures aren’t visible without careful scrutiny.
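The aggregate-versus-subgroup failure mode is easy to demonstrate with toy numbers. In this invented example, a model that looks 95 percent accurate overall is a coin flip for a minority group, and the headline metric never shows it:

```python
# Toy illustration: overall accuracy hiding subgroup failure.
# All data here is invented for the example.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# 90 majority-group cases the model gets right...
majority = [(1, 1)] * 90
# ...and 10 minority-group cases it gets half wrong.
minority = [(1, 1)] * 5 + [(0, 1)] * 5

overall = accuracy(majority + minority)   # looks fine
subgroup = accuracy(minority)             # a coin flip

print(overall, subgroup)
```

This is why evaluation has to report performance disaggregated by population, not just a single headline number.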

The Delhi summit saw the launch of the New Delhi Frontier AI Commitments, an attempt to establish governance principles. That’s a start. But principles without evaluation capacity are just words on paper.

Three Groups, Three Responsibilities

If AI is going to help sustain essential services in an era of aid contraction, three distinct groups need to act differently.

Donors, philanthropies and development banks need to treat AI deployment as infrastructure, not experimentation. That means funding the unglamorous work of training, data governance, integration and maintenance—not just pilot projects with clear beginning and end dates. It means investing in the evaluation frameworks we currently lack. And it means recognizing that the digital divide is widening: AI adoption stands at 24.7 percent in wealthy countries versus 14.1 percent across the Global South, and Africa has less than 1 percent of global data center capacity for 18 percent of the world’s population. Closing that gap is a precondition for AI to help at all.

AI companies need to build serious product lines for low-resource settings. The market is large, underserved and growing. It’s also demanding: inconsistent connectivity, older devices, dozens of languages, high stakes and thin margins. Google’s demonstration of speech-to-speech translation in more than 70 languages points in the right direction. Anthropic’s announcement that it’s curating training data in 10 widely spoken Indian languages suggests genuine commitment. But “AI for Good” needs to be a business model, not a communications project. If these tools only work where infrastructure is robust and users are wealthy, they’re not the great equalizers they claim to be.

Governments across the Global South need the capacity to evaluate and govern the AI tools they adopt. This doesn’t require training their own large language models from scratch—a prohibitively expensive proposition for most. But it does require being able to scrutinize the ones they use: understanding how decisions are made, where failures occur, whether performance is equitable across populations. This is a capacity-building challenge as much as a technical one, and it’s currently underfunded relative to its importance.

The Real Test: 2028

The UAE announced it will host the next AI Impact Summit in 2028, following its co-chairing of next year’s edition in Geneva. This positions the Emirates at the center of a conversation it has helped shape—from driving AI’s inclusion at COP28 to playing a central role in the UN Sustainable Development Goals framework that remains critical to the Global South.

But the 2028 summit represents something more important than diplomatic positioning. It’s a deadline for moving past announcements and toward accountability.

By 2028, we should know whether the tools demonstrated in Delhi have actually delivered for the populations that need them most. We should have evaluation frameworks that tell us what works and what doesn’t. We should have deployment at scale, not just pilots. We should have answers to the question written in that contrast between the convention center and the railings: who is this technology really being built for?

The workers painting those railings will be somewhere in 2028. Maybe still painting railings. Maybe retired. Maybe doing different work entirely. The question is whether the technology discussed so grandly inside the convention center will have made their lives measurably better—not through some abstract economic transformation, but through concrete improvements in the services they can access, the information they can use, the opportunities available to their children.

Beyond the Slogan

The phrase “AI will benefit all of humanity” has become obligatory in speeches and white papers. It’s repeated so often that it’s easy to forget how extraordinary a claim it is. No previous technology has benefited all of humanity. Not electricity, not the internet, not the printing press. Each created winners and losers, transformed some lives while leaving others untouched or actively harmed.

Maybe AI will be different. The Delhi summit offered reasons for hope: genuine achievements in reaching millions of farmers with useful information, serious commitments to linguistic inclusion, recognition that governance capacity matters. But it also offered reasons for skepticism: the vast gap between announced investments and demonstrated outcomes, the persistent separation between where conversations happen and where work happens, the temptation to treat technology as a substitute for the human project of aid rather than a supplement to it.

The truth is that AI cannot replace the fundamentally human work of delivering food, water, medicine and compassionate care. The people hurt by aid cuts will not be rescued by a well-deployed language model. Anyone who suggests otherwise is selling something.

But the development sector also has to reckon with where it is now and where it might be in another two or three years if the political fight to restore funding continues to fail. In that constrained world, AI tools that extend reach, reduce costs and improve targeting aren’t optional enhancements. They’re survival mechanisms.

The question isn’t whether the technology can help. It’s whether we’ll deploy it with enough humility to acknowledge its limits, enough rigor to measure its effects, and enough commitment to equity that the people outside the convention center benefit as much as the people inside.

The 2028 summit will tell us. Until then, the image of those workers on Mathura Road should stay with everyone who cares about this question—a reminder that the future being built indoors only matters if it reaches the people painting the railings outside.