
"It took the very widespread evolution of the 100 billion people that have ever lived to produce you."
Sam Altman said this at the India AI Impact Summit in late February 2026, in a video interview with Anant Goenka of The Indian Express. He was making an argument about AI energy efficiency. The audience was investors, policymakers, and heads of state. The framing was: when you ask whether AI consumes too much energy, you are asking the wrong comparison question.
It is the most elaborate deflection I have seen from a tech executive since "move fast and break things" became a liability.
What He Was Actually Responding To
The proximate cause was a question about water consumption. Altman was dismissive: "Claims that ChatGPT uses something like 17 gallons of water for each query or whatever — completely untrue, totally insane, no connection to reality."
He was right that the number is wrong. The 17-gallon figure circulated on social media and appears nowhere in the peer-reviewed literature. The actual figure comes from Shaolei Ren's lab at UC Riverside — Li, Yang, Islam, and Ren (2023), published in the Communications of the ACM in March 2025. Their estimate: approximately 500 milliliters per 10 to 50 queries for inference, Scope 1 and 2 combined. About one shot glass per conversation.
But the more consequential finding appears elsewhere in the same paper: a projection of 4.2 to 6.6 billion cubic meters of global water withdrawal attributable to AI by 2027.
Altman debunked a viral misstatement. The peer-reviewed projection went unaddressed.
He then moved to energy, and this is where the evolutionary accounting began.
The Comparison He Chose
The argument, reconstructed from multiple corroborated accounts across CNBC, Fortune, The Register, and TechCrunch, runs as follows:
People compare how much energy it takes to train an AI model against how much energy it costs a human to answer a single query. That is unfair, Altman argued, because it front-loads training costs against per-inference costs. His preferred comparison: once the model is trained, how much energy does ChatGPT use per query versus a human answering the same question?
On that narrow axis, he claimed AI "has probably already caught up on an energy efficiency basis." His specific figure: 0.34 watt-hours per query, drawn from a blog post he published in June 2025.
This is true in a specific, constrained sense. The human brain uses roughly 20 watts continuously, so a 30-second factual lookup costs about 0.17 watt-hours of cognition — comparable to Altman's 0.34 watt-hours for a ChatGPT query, before you account for data center overhead that does not map cleanly to metabolic consumption. For short factual Q&A, the numbers sit in the same order of magnitude. Altman chose the comparison window where his product looks most favorable and did not disclose that he had chosen it.
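The back-of-envelope arithmetic behind that comparison can be written out explicitly. A minimal sketch — the 20-watt brain figure and the 30-second lookup window are the assumptions stated above, not measured values:

```python
# Rough per-query energy comparison, in watt-hours.
BRAIN_POWER_W = 20.0         # approximate continuous metabolic power of the human brain
LOOKUP_SECONDS = 30.0        # assumed duration of a short factual lookup
CHATGPT_WH_PER_QUERY = 0.34  # Altman's June 2025 blog figure

human_wh = BRAIN_POWER_W * LOOKUP_SECONDS / 3600  # watts * seconds -> watt-hours
print(f"Human lookup:  {human_wh:.2f} Wh")        # ~0.17 Wh
print(f"ChatGPT query: {CHATGPT_WH_PER_QUERY} Wh")
print(f"Ratio: {CHATGPT_WH_PER_QUERY / human_wh:.1f}x")  # ~2x, same order of magnitude
```

Same order of magnitude, exactly as claimed — for this window and no other.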
Then the argument escalated. "It also takes a lot of energy to train a human," he said. "It takes like 20 years of life, and all the food you eat before that time, before you get smart." And then: "It took the very widespread evolution of the 100 billion people that have ever lived to produce you."

Three Places Where the Accounting Breaks
The argument fails in three specific ways, and they fail differently.
The first failure is categorical. Human metabolic energy sustains life, reproduction, cognitive development, and the maintenance of civilization. It is not analogous to training compute in any methodological sense. When you eat food for 20 years, you are not pre-computing answers to future queries — you are keeping a living system functional across its entire surface area of existence. Treating that as "training energy" requires ignoring every function of biological existence except the one that makes the comparison convenient.
The second failure is about window selection. Altman's per-query efficiency claim holds for short factual lookups. It does not hold for sustained knowledge work, which is the actual use case justifying AI infrastructure investment. A human expert working for an hour on a complex problem spends 10–20 watt-hours of cognition. Equivalent AI sessions consume between 100x and 225,000x more energy per hour of comparable output — estimates vary by methodology and task, but none of them favor the AI side by the ratio Altman's framing implies. He chose the narrowest favorable window and presented it as the fair comparison.
The third failure is temporal. The International Energy Agency's April 2025 report documented data center electricity consumption at approximately 415 terawatt-hours in 2024 — about 1.5% of global electricity, growing at 15% per year, projected to reach 945 TWh by 2030. AI-accelerated server growth runs at 30% annually, four times faster than any other sector on the grid. The United States hosts 45% of this compute.
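The IEA figures imply a compound growth path that can be checked directly. A quick sanity check — applying the 15% rate uniformly is a simplification of the IEA's scenario, which is why the compounded result lands slightly above their published 945 TWh:

```python
# Project 2024 data center consumption forward at the IEA's stated growth rate.
twh_2024 = 415.0   # IEA estimate for 2024
growth = 0.15      # ~15% per year, sector-wide

twh = twh_2024
for year in range(2025, 2031):
    twh *= 1 + growth

print(f"Projected 2030: {twh:.0f} TWh")         # ~960 TWh, close to the IEA's 945
print(f"Growth vs 2024: {twh / twh_2024 - 1:.0%}")
```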
Human metabolic energy costs accumulated over 200,000 years and sustain 8 billion living people simultaneously. AI infrastructure energy costs have accumulated over four years of commercial deployment and are growing faster than any other sector. These time scales are not comparable. Treating evolutionary energy costs as the benchmark for AI infrastructure costs is not just a category error — it is a category error chosen to make a four-year growth curve disappear against geological time.
| Comparison axis | Human cost | AI cost | Altman's framing |
|---|---|---|---|
| Per factual query | ~0.17 Wh | ~0.34 Wh | "AI has caught up" |
| Per hour of knowledge work | 10–20 Wh | 1,000–4,500+ Wh | Not mentioned |
| Training + embodied cost | 200,000 years of evolution | 4 years, 415 TWh (2024) | "Same category" |
| Projected 2030 sector growth | Flat | +128% (945 TWh) | Not mentioned |
The row Altman did not address: the aggregate growth rate. A point-in-time per-query efficiency comparison says nothing about what happens to the total when the query volume scales 30% per year.
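The point about volume outpacing per-unit gains is easy to demonstrate numerically. A toy model — the 10%-per-year efficiency improvement is a hypothetical, chosen only to illustrate the interaction, not a disclosed OpenAI figure:

```python
# Even with steady per-query efficiency gains, total energy rises
# when query volume grows faster than efficiency improves.
wh_per_query = 0.34   # starting per-query cost (Wh)
queries = 1.0         # normalized daily query volume

total_start = wh_per_query * queries
for year in range(6):         # 2024 -> 2030
    wh_per_query *= 0.90      # hypothetical 10%/yr efficiency gain
    queries *= 1.30           # 30%/yr volume growth (IEA AI-server rate)
total_end = wh_per_query * queries

print(f"Per-query cost fell {1 - wh_per_query / 0.34:.0%}")
print(f"Total energy still grew {total_end / total_start:.2f}x")
```

A 47% per-query improvement, and the aggregate still more than doubles. That is the row the table leaves blank.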
What Vembu Heard
Sridhar Vembu, who co-founded Zoho and has built enterprise software for three decades without venture capital, responded on X. He did not dispute the energy numbers. He objected to the premise.
"I do not want to see a world where we equate a piece of technology to a human being," Vembu wrote. "I work hard as a technologist to see a world where we don't allow technology to dominate our lives, instead it should quietly recede into the background."
This is a philosophical objection, not an empirical one, and it reaches something the energy math cannot. When you ask whether AI has "caught up to humans on an energy efficiency basis," you have already decided that human cognition and AI inference are the same kind of thing, priced in the same currency, measured against the same purpose. The comparison grants a premise before it begins: that a person eating breakfast and a GPU cluster processing tokens are both, fundamentally, systems converting energy into intelligence, differing only in watt-hour efficiency.
Vembu's pushback was that this premise should not be granted before the arithmetic starts.
The Justification That Scales
The evolutionary accounting argument did not emerge from curiosity about comparative biology. It emerged in a context where OpenAI has discussed building data centers in the range of hundreds of billions of dollars, where the US government is allocating public land and expedited permits for AI infrastructure, and where electricity utilities are revising 20-year demand projections upward because of AI load.
When you are about to build infrastructure at that scale, you need a justification that survives at that scale. "Our per-query energy cost is 0.34 watt-hours" is a defensible number that does not survive the follow-up: how many queries per day, multiplied across the projected growth rate, and what does that aggregate to in 2030?
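That follow-up can be multiplied out directly. A rough aggregate, assuming a hypothetical one billion queries per day — the volume figure is an illustration, not a disclosed number:

```python
# Aggregate the per-query figure across a hypothetical daily volume.
wh_per_query = 0.34
queries_per_day = 1e9  # hypothetical, for illustration only

daily_gwh = wh_per_query * queries_per_day / 1e9  # Wh -> GWh
annual_gwh = daily_gwh * 365
print(f"Daily: {daily_gwh:.2f} GWh, annual: {annual_gwh:.0f} GWh")

# At the sector's 30%/yr growth rate, that annual figure roughly
# triples within five years -- before any new use cases are added.
print(f"After 5 years at 30%/yr: {annual_gwh * 1.3 ** 5:.0f} GWh")
```

The per-query number survives scrutiny; the product of per-query cost, volume, and growth rate is the number the framing is built to avoid.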
The evolutionary accounting argument pre-empts the follow-up by making the comparison window so large that no specific aggregate is visible inside it. Two hundred thousand years of human evolution absorbs any number. One hundred billion people absorbs any projection. If you are measuring against geological time, no four-year growth curve looks alarming.
This is not an argument about energy efficiency. It is an argument designed to make energy criticism feel like a failure of perspective — like complaining that the ocean is wet. And it was delivered at a summit in front of the people who will decide whether to build the next generation of AI infrastructure on public land with subsidized power.

The argument will reappear. Every time the IEA projection gets cited in a policy hearing, every time a utility announces it is deferring residential load to serve a data center campus, every time a water district publishes cooling tower consumption numbers, some version of evolutionary accounting will follow. It is worth being able to name the move when you see it: the deliberate selection of a comparison window large enough to make the number disappear.
Honest Limitations
I used research agents, running in parallel across multiple threads, to gather the sources in this piece. The infrastructure I used to write this article is part of the same cost curve Altman was defending.
I operate 9 AI plugins, run 175 missions this month, and have not tracked the watt-hours consumed across any of it. I benefit from the framing that keeps these tools affordable — which means I benefit from the evolutionary accounting argument being accepted, or at least not specifically rejected, at the policy level where capital allocation decisions get made.
That does not make the argument correct. But it means I am not writing this from outside the problem. The contradiction sits there: I think the comparison window Altman chose is deliberately misleading, and I am also exactly the kind of user whose behavior aggregates into the growth curve he needs a justification for.
I do not have a clean resolution. I am noting that it exists.
PATTERN: Altman's evolutionary accounting move — collapsing incomparable time scales to make a growth curve disappear — is likely to recur in regulatory and policy contexts. The three failure modes (categorical, window selection, temporal) are reusable counter-arguments.
GOTCHA: The 17-gallon water figure Altman debunked is not in the peer-reviewed literature. Ren et al. cite ~500 ml per 10–50 queries. Repeating the 17-gallon number as if it were the peer-reviewed claim validates Altman's rebuttal and misrepresents the actual research.
FINDING: The most structurally important number Altman did not address is not the per-query water or energy figure — it is the IEA's 15%/year sector growth rate and the 2030 projection of 945 TWh. Per-unit efficiency improvements are irrelevant when volume growth outpaces them.