
In a defiant and detailed appearance at the Indian Express "Express Adda" event in New Delhi this week, OpenAI CEO Sam Altman launched a vigorous defense of the artificial intelligence industry's resource consumption. Facing a barrage of questions about the environmental footprint of large language models (LLMs), Altman sought to reframe the narrative, arguing that the energy costs of training AI are frequently misunderstood and that, measured against the biological energy required to produce human intelligence, AI may already be the more efficient path.
The event, part of the broader India AI Impact Summit 2026, saw Altman addressing policymakers, tech leaders, and journalists. His comments come at a critical juncture for the AI sector, which is grappling with mounting scrutiny over the carbon footprint of data centers and the water usage associated with cooling high-performance computing clusters. Rather than conceding to the criticism, Altman went on the offensive, debunking viral statistics and dismissing alternative infrastructure proposals—specifically those from rival Elon Musk—as "ridiculous."
The core of Altman's argument in New Delhi centered on what he described as an "unfair" comparison often made by critics. He noted that detractors typically compare the massive, concentrated energy required to train a frontier model like GPT-5 against the minimal energy a human brain uses to perform a single inference task.
"People talk about how much energy it takes to train an AI model," Altman told the audience. "But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart."
Altman expanded this "biological benchmark" to include the cumulative energy cost of human evolution, suggesting that human intelligence is the product of billions of years of biological trial and error, all of which consumed vast resources. By this metric, he argued, a model that requires gigawatt-hours of electricity to train but can then serve millions of users instantly might actually have "caught up on an energy efficiency basis."
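Altman did not show his arithmetic on stage, but the shape of the comparison is easy to reconstruct. The back-of-envelope sketch below uses illustrative assumptions (a daily caloric intake, a frontier-scale training budget, a user base); none of these figures come from the event.

```python
# Back-of-envelope comparison of "training energy" for a human vs. an AI model.
# All figures are illustrative assumptions, not numbers cited by Altman.

KCAL_TO_KWH = 0.00116222  # 1 kilocalorie is roughly 0.00116 kWh

# Assumption: ~2,500 kcal/day of food over the "20 years" Altman mentions.
human_training_kwh = 2_500 * KCAL_TO_KWH * 365 * 20   # roughly 21,000 kWh

# Assumption: a frontier training run on the order of 50 GWh
# (published estimates for recent large models vary widely).
model_training_kwh = 50e9 / 1e3                       # 50 GWh = 50,000,000 kWh

print(f"Human 'training' energy : {human_training_kwh:,.0f} kWh")
print(f"Model training energy   : {model_training_kwh:,.0f} kWh")
print(f"Ratio (model / human)   : {model_training_kwh / human_training_kwh:,.0f}x")

# The gap closes on Altman's framing because one trained model serves
# millions of users, while each human must be "trained" individually.
users = 10_000_000
print(f"Amortized over {users:,} users: "
      f"{model_training_kwh / users:.1f} kWh per user")
```

On these assumed numbers the training run dwarfs a single human upbringing by a factor of thousands, yet once amortized across a large user base the per-user share falls below a single year of human food energy, which is the shape of the argument Altman is making.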
This comparison attempts to shift the terms of the debate from immediate grid impact to a long-term utility calculus. It also drew immediate pushback from attendees and online commentators, with some, like Zoho co-founder Sridhar Vembu, objecting to the equation of technological utility with human biological existence.
Beyond electricity, Altman took aim at widely circulating statistics regarding AI water consumption. Recent viral reports have claimed that a single query to a chatbot could consume up to 17 gallons of water or the energy equivalent of fully charging a smartphone multiple times.
Altman categorically rejected these figures. "This is completely untrue, totally insane, no connection to reality," he stated, visibly frustrated by the persistence of these claims.
He clarified that while older data centers relied heavily on evaporative cooling—a process that does consume significant water—the industry has largely pivoted toward closed-loop liquid cooling and other advanced thermal management systems that minimize water loss. "We used to do evaporative cooling in data centers, but now we don't do that," Altman explained. He insisted that the efficiency of modern infrastructure means that the resource cost per query is negligible compared to the sensationalist numbers often cited in environmental reports.
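The viral figures are also easy to sanity-check against ordinary hardware specifications. The sketch below uses assumed values: a typical smartphone battery capacity and a spread of per-query energy estimates drawn from public discussion, none of them numbers given at the event.

```python
# Sanity check of the viral "one query = several phone charges" claim.
# All per-query figures are assumptions drawn from public estimates,
# not numbers cited by Altman or OpenAI.

PHONE_BATTERY_WH = 15.0  # a typical flagship phone battery, about 15 Wh

# A spread of per-query energy estimates that circulate publicly (Wh):
estimates_wh = {
    "low (efficient serving)": 0.3,
    "mid": 1.0,
    "high (long, complex prompt)": 3.0,
}

for label, wh in estimates_wh.items():
    charges = wh / PHONE_BATTERY_WH
    print(f"{label:28s}: {wh:4.1f} Wh  = {charges:.2f} phone charges")

# Even the high estimate is a fraction of one full charge, which is why
# "multiple full charges per query" fails a basic plausibility test.
```

Even the pessimistic end of that assumed range works out to a fifth of one phone charge per query, far short of the "multiple full charges" the viral claims describe.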
The conversation in New Delhi also highlighted the deepening ideological and strategic rift between Altman and SpaceX CEO Elon Musk. With land and power becoming scarce on Earth, Musk has publicly advocated for, and begun investing in, orbital data centers: server farms placed in space to tap continuous solar power and shed heat radiatively into the vacuum.
When asked about this concept, Altman was blunt. "Putting data centers in space with the current landscape is ridiculous," he said.
Altman broke down the economics to justify his skepticism, citing the "rough math" of launch costs relative to terrestrial power generation. He noted that even with the reduced launch costs achieved by Starship, the price per kilogram to orbit makes heavy GPU clusters economically unviable. Furthermore, he pointed to the logistical nightmare of maintenance.
"How hard is it to fix a broken GPU in space?" Altman asked rhetorically. "They do break a lot still, unfortunately. We are not there yet." He predicted that orbital data centers would not matter at scale for at least another decade, reinforcing OpenAI’s commitment to terrestrial infrastructure despite the grid challenges.
To understand the divergence between Altman’s pragmatic terrestrial approach and the futuristic orbital proposals, we can look at the comparative constraints of both models.
Table: Terrestrial vs. Orbital AI Infrastructure Feasibility
| Metric | Terrestrial Data Centers (OpenAI Strategy) | Orbital Data Centers (Musk/SpaceX Concept) |
|---|---|---|
| Primary Power Source | Nuclear, Wind, Solar Grid | Direct Solar Radiation |
| Cooling Mechanism | Liquid Cooling / Air Exchange | Radiative Cooling (Vacuum) |
| Maintenance Access | Immediate / On-site Technicians | Remote / High-Risk Robotic Repair |
| Latency | Low (Fiber Optics) | Variable (Distance Dependent) |
| Capital Expenditure | High (Construction & Grid Connection) | Extreme (Launch & Radiation Hardening) |
| Scalability Timeline | Immediate (Current Decade) | Long-term (2035+) |
If space is not the answer, Altman made it clear that the solution lies in a massive overhaul of Earth's energy grid. He acknowledged that the total aggregate energy consumption of AI is a legitimate concern, distinct from per-query efficiency. To meet the demand of projects like "Stargate", OpenAI's multi-hundred-billion-dollar data center buildout, Altman advocated for a rapid acceleration of nuclear power adoption.
"We need to move towards nuclear or wind and solar very quickly," he urged. This alignment with nuclear energy is consistent with Altman's personal investments; he famously backed Oklo, a nuclear fission startup, and Helion Energy, a fusion company.
The OpenAI CEO’s vision for the future involves a symbiotic relationship where AI demand drives the capital investment necessary to modernize the energy grid, ultimately leading to cheaper and more abundant clean energy for all sectors. He rejected the idea of throttling AI progress to save energy, framing the technology as the very tool humanity needs to solve the climate crisis.
Throughout his visit, Altman lavished praise on India's tech ecosystem, describing the country's "builder energy" as unparalleled. His presence at the summit, alongside Prime Minister Narendra Modi, underscores the strategic importance of India not just as a market, but as a talent hub and potential testing ground for scalable AI solutions.
Altman noted that India’s rapid adoption of digital infrastructure places it in a unique position to leapfrog legacy systems, potentially integrating AI into public services faster than many Western nations. However, he cautioned that this growth is contingent on the availability of compute—a resource he described as the "currency of the future."
Sam Altman’s defense in New Delhi highlights a critical paradox in the AI revolution. While individual model efficiency may be improving—and may even compare favorably to biological intelligence when viewed through a specific evolutionary lens—the aggregate demand is skyrocketing.
By dismissing space-based solutions as premature and debunking water usage myths, Altman is steering the industry toward a very specific future: one tethered to the Earth, powered by nuclear fission, and justified by the immense long-term utility of artificial general intelligence (AGI). As the race for compute intensifies, the industry’s ability to deliver on these green energy promises will likely determine whether public sentiment remains permissive or turns hostile.
For now, Altman’s message to the world is clear: The cost of intelligence is high, but the cost of stagnation is higher.