AI's Data Center Thirst Puts 2030 Net Zero Goals in Doubt

A new forecast says AI data centres won't hit net zero by 2030 without big shifts in siting, clean energy, and efficiency. Smarter choices could slash emissions and water use.

Published on: Nov 12, 2025


AI adoption is surging, and so are the demands on energy and water. A new forecast suggests the industry won't hit net zero by 2030 without serious changes to where we build, how we power, and how efficiently we run AI infrastructure.

Researchers led by Fengqi You at Cornell modelled the energy, water and carbon footprint of leading AI servers through 2030 across multiple growth scenarios and U.S. locations. They combined projected chip supply, server power draw, cooling efficiency and state-by-state grid mixes to map out what's coming.

What the numbers say

By 2030, U.S. AI server buildouts could require 731 million to 1.125 billion additional cubic metres of water. Annual emissions could land between 24 and 44 million tonnes of CO2-equivalent. The range depends on AI demand growth, how many high-end servers manufacturers can ship, and where new data centres are placed.

That last variable matters a lot. Grid carbon intensity and water availability vary widely by state, which means two otherwise identical facilities can have very different footprints.
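To see why siting dominates, here is a back-of-envelope footprint model. All numbers (power draw, PUE, grid intensity, WUE) are illustrative assumptions for demonstration, not values from the Cornell study.

```python
# Illustrative back-of-envelope model: two identical facilities,
# different locations. All input values are hypothetical.
HOURS_PER_YEAR = 8760

def annual_footprint(it_power_mw, pue, grid_kgco2_per_kwh, wue_l_per_kwh):
    """Return (tonnes CO2e/year, cubic metres water/year) for one facility."""
    facility_kwh = it_power_mw * 1000 * HOURS_PER_YEAR * pue
    emissions_t = facility_kwh * grid_kgco2_per_kwh / 1000   # kg -> tonnes
    water_m3 = facility_kwh * wue_l_per_kwh / 1000           # litres -> m3
    return emissions_t, water_m3

# Same 50 MW of IT load; only grid mix and cooling water use differ.
clean_grid = annual_footprint(50, pue=1.2, grid_kgco2_per_kwh=0.15, wue_l_per_kwh=0.3)
dirty_grid = annual_footprint(50, pue=1.2, grid_kgco2_per_kwh=0.65, wue_l_per_kwh=1.8)

print(f"clean grid: {clean_grid[0]:,.0f} t CO2e, {clean_grid[1]:,.0f} m3 water")
print(f"dirty grid: {dirty_grid[0]:,.0f} t CO2e, {dirty_grid[1]:,.0f} m3 water")
```

With these assumed inputs, the otherwise identical facility on the dirtier grid emits over four times as much CO2e and draws six times as much water, which is the point the forecast makes about placement.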

Three levers that move the needle

The team found three practical levers that meaningfully reduce environmental costs:

  • Location: Midwestern states generally offer more water and cleaner grids than many coastal or sunbelt hubs.
  • Cleaner energy: Decarbonising supply (new renewables, storage, firm low-carbon power) cuts emissions at the source.
  • Efficiency: Better server utilisation, model efficiency, and smarter cooling reduce both energy and water use.

Together, these moves could cut sector emissions by up to 73% and the water footprint by up to 86% in the model's scenarios.
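One simple way to think about how independent levers stack: each removes a fraction of what remains after the others. The per-lever percentages below are invented for illustration (the study does not break its 73% figure down this way), but they show the compounding arithmetic.

```python
# Illustrative only: independent fractional reductions compound
# multiplicatively. The per-lever splits here are hypothetical.
def combined_reduction(*fractions):
    """Combine independent fractional reductions into one overall cut."""
    remaining = 1.0
    for f in fractions:
        remaining *= (1.0 - f)
    return 1.0 - remaining

# e.g. 40% from siting, 35% from cleaner power, 30% from efficiency
total = combined_reduction(0.40, 0.35, 0.30)
print(f"combined emissions cut: {total:.0%}")
```

Note that the cuts do not simply add: three levers of 40%, 35% and 30% yield roughly a 73% total, not 105%.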

Communities are pushing back

Local opposition is already reshaping buildouts. Virginia hosts roughly one-eighth of global data centre capacity, and residents are challenging further expansion over water and environmental concerns. Similar petitions have appeared in Pennsylvania, Texas, Arizona, California and Oregon.

Data Center Watch estimates about $64 billion worth of projects have been stalled by local resistance. How much water and power these sites would have used remains uncertain, which is why solid forecasts like this are getting attention.

Experts: progress, skepticism, and a call for transparency

Sasha Luccioni at Hugging Face notes that forecasting is tricky because breakthroughs can change compute needs overnight; she points to work like DeepSeek that trims brute-force computation. Her push: require developers to track and report compute and energy use, share it with users and policymakers, and commit to reductions across the board.

Chris Preist at the University of Bristol agrees on investing in new renewable capacity and says location choices are pivotal. He also argues the study's assumptions about direct water cooling may be pessimistic and that its "best case" looks more like current best practice.

What IT, engineering, and ops teams can do now

  • Site selection: Prioritise regions with lower grid carbon intensity and resilient water supplies. Avoid water-stressed basins where possible.
  • Procure additional clean power: Use long-term PPAs and add storage to improve hourly matching. Focus on additionality, not just certificates.
  • Measure and disclose: Track energy, emissions (location- and market-based), PUE and WUE. Publish per-train and per-inference metrics in model cards and internal dashboards.
  • Right-size the workload: Apply pruning, distillation, quantisation and caching. Schedule training for low-carbon hours. Reuse checkpoints and share foundation models internally to reduce redundant runs.
  • Cooling choices: Prefer closed-loop systems, air-side economisation where feasible, and liquid cooling designed to minimise make-up water. Consider heat reuse to nearby facilities.
  • Utilisation first: Maximise GPU occupancy, consolidate clusters, and retire underutilised hardware. Implement autoscaling and priority queues to avoid idle capacity.
  • Targets and governance: Set interim goals for 2026-2028, align with science-based targets, and use an internal carbon price to guide procurement and architecture decisions.
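The scheduling point above ("schedule training for low-carbon hours") can be sketched as a small carbon-aware planner: given an hourly grid-intensity forecast, pick the cleanest contiguous window for a batch job. The forecast values are hypothetical; a real deployment would pull them from a grid-intensity API.

```python
# Sketch of carbon-aware batch scheduling: choose the contiguous window
# with the lowest average forecast grid intensity. Forecast values are
# hypothetical placeholders.
def best_window(forecast_g_per_kwh, job_hours):
    """Return (start_hour, avg_intensity) of the cleanest contiguous window."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast_g_per_kwh) - job_hours + 1):
        avg = sum(forecast_g_per_kwh[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# 24 hourly gCO2/kWh values, dipping overnight and at the midday solar peak
forecast = [520, 510, 480, 450, 430, 420, 440, 470,
            500, 460, 380, 320, 300, 310, 350, 420,
            490, 540, 560, 550, 530, 520, 515, 510]
start, avg = best_window(forecast, job_hours=4)
print(f"run the 4-hour job starting at hour {start} (avg {avg:.0f} gCO2/kWh)")
```

The same pattern extends to checkpoint-and-resume training: pause when intensity spikes, resume in the next clean window.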

Why this matters for your roadmap

Net zero by 2030 won't happen by accident. Technology choices, siting, and energy procurement are product decisions now. Teams that design for efficiency and transparency will ship more compute for less cost, and face fewer community and regulatory roadblocks.

Source and further reading: See the study in Nature Sustainability for methodology and scenarios: Projected energy, water and carbon footprints of AI data centres by 2030.

If you're leading AI initiatives and want to upskill your team on efficient AI workflows and MLOps, explore our curated training paths: AI courses by job role.
