The hidden cost of artificial intelligence

How Thirsty Is Your AI?

Every AI query evaporates freshwater to cool the servers that power it. One teaspoon at a time, multiplied billions of times a day, it adds up to rivers.

3–5 ml
Freshwater per AI query
2.5B
ChatGPT prompts sent daily
+48%
Rise in Google's water use since 2021

One question.
One teaspoon.
~5 ml

That's the mid-range estimate for a single AI query, accounting for both the water evaporated cooling server racks and the water used by power plants generating the electricity.

Why estimates vary 100×: Companies like OpenAI and Google report only direct cooling water (~0.3 ml). Independent researchers include the water consumed generating electricity, typically 6–12× larger. Scope is everything.
≈ 1 teaspoon per AI query
Mid-range estimate including direct + indirect water use
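The scope gap can be sketched with back-of-envelope arithmetic. A minimal sketch, using the ~0.3 ml direct figure and the 6–12× indirect multiplier quoted above; the resulting band roughly brackets the 3–5 ml mid-range:

```python
# Illustrative scope arithmetic using figures quoted in this article.
DIRECT_ML = 0.3              # on-site cooling water per query (self-reported)
INDIRECT_RANGE = (6, 12)     # grid water is typically 6-12x the direct figure

# Full scope = direct cooling + indirect electricity water.
low = DIRECT_ML * (1 + INDIRECT_RANGE[0])    # ~2.1 ml
high = DIRECT_ML * (1 + INDIRECT_RANGE[1])   # ~3.9 ml
print(f"Full-scope estimate: {low:.1f}-{high:.1f} ml per query")
```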

Not all AI models
drink the same.
Water use varies dramatically by model size, task complexity, and whether you count just the cooling tower or the whole grid.
Google Gemini (median text prompt): 0.26 ml
ChatGPT (Altman official figure): 0.32 ml
GPT-4o (including indirect water): ~3.5 ml
Claude (no published data; efficient infra): ~3 ml*
Copilot (uses OpenAI models): ~3.5 ml*
GPT-3 (original study estimate): ~30 ml
GPT-5 (including indirect water): ~32 ml
Mistral Large 2 (full lifecycle, LCA): 45 ml
* Estimated based on infrastructure provider efficiency. No official disclosure.

🏭 Direct Water (On-site)

Water evaporated in cooling towers at the data center itself. This is what companies self-report: ~0.3 ml per query.

⚡ Indirect Water (Grid)

Water consumed by power plants generating the electricity. Typically 6–12× larger than direct cooling and almost never disclosed.

2.5 billion queries a day.
Every teaspoon counts.

At ChatGPT's current volume alone, daily inference water consumption matches GPT-3's entire training footprint, every single day.

8.75M L
Daily inference water
3.2B L
Yearly inference water
1.2M
People supplied a year's drinking water
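The headline totals above follow from simple multiplication. A sketch assuming the article's ~3.5 ml mid-range per-query figure:

```python
# Daily and yearly inference water at ChatGPT's reported query volume.
QUERIES_PER_DAY = 2.5e9
ML_PER_QUERY = 3.5            # mid-range full-scope estimate (assumption)

daily_liters = QUERIES_PER_DAY * ML_PER_QUERY / 1000   # ml -> L
yearly_liters = daily_liters * 365

print(f"Daily:  {daily_liters / 1e6:.2f} million liters")   # ~8.75M L
print(f"Yearly: {yearly_liters / 1e9:.2f} billion liters")  # ~3.19B L
```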
What does a teaspoon look like at scale?
Comparing AI queries to everyday water use helps ground the numbers. Here's how many queries equal familiar activities.
Glass of water ≈ 69 AI queries
8-min shower ≈ 21,400 AI queries
One tomato ≈ 3,700 AI queries
Cup of coffee ≈ 40,000 AI queries
Google search: 7–10× less water than an AI query
One hamburger ≈ 700K AI queries
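These equivalences are just activity volume divided by per-query water. A sketch at ~3.5 ml per query; the activity volumes here are common reference figures assumed for illustration (not measurements from this piece), so the results round slightly differently than the cards above:

```python
# Query-equivalents for everyday water uses, at ~3.5 ml per AI query.
# Activity volumes below are assumed reference figures, for illustration only.
ML_PER_QUERY = 3.5

activities_ml = {
    "glass of water": 240,
    "8-min shower": 75_000,
    "one tomato": 13_000,
    "cup of coffee": 140_000,    # full supply-chain water footprint
    "one hamburger": 2_450_000,
}

for name, ml in activities_ml.items():
    print(f"{name}: ~{ml / ML_PER_QUERY:,.0f} AI queries")
```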

Big Tech's rising
water bill.
Every major AI company's water consumption has surged since 2021. Amazon won't even publish its numbers.
Company      2021       2022           2023             Growth
Google       16.3B L    19.7B L        24.2B L          +48%
Microsoft    4.8B L     6.4B L         7.8B L           +63%
Meta         ~2.7B L    ~2.9B L        3.1B L           +15%
Amazon       29.1B L*   Not disclosed  Not disclosed    Unknown
xAI          N/A        N/A            ~1.8B L (est.)   No reports
* Amazon figure from leaked internal documents (SourceMaterial/The Guardian). Includes indirect electricity water.
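The growth column can be reproduced from the 2021 and 2023 figures. Because the table's inputs are themselves rounded, the recomputed percentages can differ by a point from the published ones:

```python
# Recomputing growth from the 2021 and 2023 figures (billions of liters).
water_bL = {
    "Google":    (16.3, 24.2),
    "Microsoft": (4.8, 7.8),
    "Meta":      (2.7, 3.1),
}

for company, (y2021, y2023) in water_bL.items():
    growth = (y2023 - y2021) / y2021 * 100
    print(f"{company}: +{growth:.1f}%")
```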
[Chart: Google, Microsoft, and Meta water consumption, 2021–2023]
More efficient.
More water.

Every company has dramatically improved water efficiency per unit of compute. But demand is scaling faster than efficiency gains: a textbook Jevons Paradox.

📉
-39%
Microsoft WUE improvement
(0.49 → 0.30 L/kWh)
📉
33×
Google energy reduction per Gemini prompt (2024–2025)
📈
250×
ChatGPT daily query growth
(10M → 2.5B, 2023–2025)
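The Jevons dynamic can be put in numbers. A sketch combining the 39% efficiency gain and the 250× query-volume growth quoted above as round hypotheticals:

```python
# Per-query efficiency falls, but total consumption still rises.
efficiency_gain = 0.39        # 39% less water per unit of compute
volume_growth = 250           # 10M -> 2.5B daily queries

# Net change = (remaining per-query fraction) x (volume multiplier).
total_change = (1 - efficiency_gain) * volume_growth
print(f"Per-query water: -39%; total water: ~{total_change:.0f}x higher")
```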
Training is a flood.
Inference is the rising tide.
Training GPT-3 consumed ~5.4 million liters. Massive, but ChatGPT's daily inference matches that every single day.
GPT-3 Training
5.4M L
One-time cost
~90 days
vs
Daily Inference
8.75M L
Every single day
1.6× training / day
80–90% of all AI compute goes to inference, not training. The headlines focus on training costs, but it's the billions of daily queries that define AI's true water footprint.
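The training-versus-inference ratio above is a one-line calculation from the article's own figures:

```python
# One-time training water vs. one day's inference water.
TRAINING_LITERS = 5.4e6            # GPT-3 training (~90 days, one-time)
DAILY_INFERENCE_LITERS = 8.75e6    # ChatGPT inference, per day

ratio = DAILY_INFERENCE_LITERS / TRAINING_LITERS
print(f"One day of inference = {ratio:.1f}x the full GPT-3 training footprint")
```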

Awareness is the
first drop.

AI's water footprint is simultaneously smaller and larger than headlines suggest. A single query's 3–5 ml is modest compared to a cup of coffee. But 2.5 billion daily queries transform teaspoons into rivers. The trajectory, not the snapshot, is what matters most.

Ask with intention

Each query has a cost. Be specific. Reduce back-and-forth. Use simpler models for simple tasks.

Demand transparency

Only Google and Mistral have published detailed per-query disclosures. Push other providers for full-scope data.

Support local communities

Data centers concentrate in specific regions. From Memphis to The Dalles, communities bear costs they didn't choose.