Global Data Centre Electricity Demand Set to Double by 2030, Driven by AI
New analysis from Gartner projects global data centre electricity consumption will climb from 448 TWh in 2025 to 980 TWh by 2030. That surge puts real pressure on grid capacity, siting decisions, and operating models across IT, development, and operations.
The headline numbers
AI-optimised servers are the main driver. Their consumption is expected to jump from 93 TWh in 2025 to 432 TWh by 2030.
By the end of the decade, these systems are set to account for 44% of total data centre power usage, up from 21% in 2025. They represent 64% of the incremental load added through 2030.
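These shares follow directly from the TWh figures above. A quick arithmetic check (values from the analysis; Python used for illustration):

```python
# Sanity-check the reported shares using the TWh figures quoted above.
total_2025, total_2030 = 448, 980   # global data centre load, TWh
ai_2025, ai_2030 = 93, 432          # AI-optimised server load, TWh

ai_share_2025 = ai_2025 / total_2025                                  # ~21%
ai_share_2030 = ai_2030 / total_2030                                  # ~44%
incremental_share = (ai_2030 - ai_2025) / (total_2030 - total_2025)   # ~64%

print(f"AI share 2025: {ai_share_2025:.0%}")
print(f"AI share 2030: {ai_share_2030:.0%}")
print(f"AI share of incremental load: {incremental_share:.0%}")
```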
Why demand is accelerating
Training and serving large models need dense accelerators, higher rack power, and more aggressive cooling. Even with per-chip efficiency gains and better utilisation, total load grows as AI adoption scales across products and workflows.
Expect higher average rack densities (30-100 kW+), greater use of liquid cooling, and tighter integration between workload scheduling and facility constraints.
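The pace of that acceleration can be expressed as implied compound annual growth rates. A minimal sketch derived from the headline TWh figures:

```python
# Implied compound annual growth rates (CAGR) behind the 2025-2030 figures.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Total data centre load grows ~17%/yr; AI-optimised servers ~36%/yr.
total_cagr = cagr(448, 980, 5)
ai_cagr = cagr(93, 432, 5)

print(f"Total data centre load CAGR: {total_cagr:.1%}")
print(f"AI-optimised server CAGR:    {ai_cagr:.1%}")
```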
Regional outlook
The US and China are set to dominate AI infrastructure, together accounting for over two-thirds of global data centre electricity consumption. China appears slightly better positioned due to more efficient servers and stronger infrastructure planning.
In the US, data centre electricity use as a share of regional consumption is forecast to rise from 4% in 2025 to 7.8% by 2030. Europe's share is projected to increase from 2.7% to 5% over the same period.
Energy mix: what changes, what doesn't
Near term, natural gas remains the primary on-site source for reliability. Battery energy storage systems (BESS) are expected to see strong growth to smooth solar and wind variability and reduce generator runtime.
Clean alternatives for microgrids are moving from pilots to early adoption by decade's end: green hydrogen, geothermal, and small modular reactors (SMRs). Geothermal shows promise but faces high upfront costs and regulatory hurdles, so it is likely to stay niche for now.
What this means for IT, Dev, and Ops
- Plan for power as a first-class constraint. Model site-level MW needs, grid interconnect timelines, and 2x energy scenarios through 2030.
- Right-size AI clusters. Separate training and inference capacity, enforce power caps, and schedule non-urgent jobs to low-carbon or off-peak windows.
- Raise efficiency. Target PUE ≤1.2 where feasible, adopt liquid cooling in high-density zones, and use high-efficiency PSUs and airflow management.
- Code smarter. Apply quantisation, sparsity, and distillation; tune batch sizes; use mixed precision; cache embeddings; profile and compile kernels.
- Instrument everything. Feed per-rack metering, DCIM data, and accelerator utilisation telemetry into your observability stack.
- Procure with intent. Include Scope 2 clauses, pursue PPAs, and move early in interconnect queues; specify EPEAT/ENERGY STAR where applicable.
- Build resilience. Size BESS for n-hour autonomy, keep gas gensets as a bridge, and test black start. Evaluate SMR or hydrogen pilots where policy allows.
- Choose smart locations. Favour regions with spare grid capacity, cooler climates, strong renewables, and adequate water for cooling.
- Watch the economics. Track LCOE, grid fees, carbon intensity, curtailment risk, and potential demand response revenue.
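The scheduling idea above (power caps plus low-carbon windows) can be sketched as a simple admission policy. The thresholds, job fields, and helper names here are illustrative assumptions, not a real scheduler API:

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values come from your site and grid data.
POWER_CAP_KW = 900             # assumed power cap for the AI cluster
CARBON_LIMIT_G_PER_KWH = 300   # assumed grid-intensity ceiling for deferrable work

@dataclass
class Job:
    name: str
    est_power_kw: float
    deferrable: bool  # e.g. batch training vs. latency-sensitive inference

def admit(job, current_draw_kw, grid_g_per_kwh):
    """Run now, or defer to an off-peak / low-carbon window."""
    if current_draw_kw + job.est_power_kw > POWER_CAP_KW:
        return "defer"  # would breach the site power cap
    if job.deferrable and grid_g_per_kwh > CARBON_LIMIT_G_PER_KWH:
        return "defer"  # wait for a cleaner grid window
    return "run"

# Batch training defers on a dirty grid; latency-sensitive serving runs.
print(admit(Job("nightly-train", 200, True), 650, 450))   # defer
print(admit(Job("serving", 50, False), 650, 450))         # run
```

A real implementation would pull `current_draw_kw` from facility metering and grid intensity from a carbon-data feed, but the decision logic stays this small.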
Key metrics to track
- AI share of data centre load (baseline expectation: ~44% by 2030)
- PUE and WUE at site and fleet level
- Rack density distribution and cooling envelope
- kWh per training run and per 1k inferences
- Accelerator utilisation (%) and idle time
- Carbon intensity (gCO2e/kWh) by site and over time
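Two of these metrics reduce to simple ratios over your telemetry. A minimal sketch with made-up meter readings (the input values are placeholders):

```python
# PUE: total facility energy / IT equipment energy (dimensionless, ideal = 1.0).
facility_kwh = 1_200.0   # illustrative facility meter reading over a window
it_kwh = 1_000.0         # illustrative IT-equipment energy over the same window
pue = facility_kwh / it_kwh
print(f"PUE: {pue:.2f}")

# kWh per 1k inferences from accelerator energy and request counts.
accel_kwh = 18.0         # illustrative accelerator energy over the window
inferences = 240_000     # requests served in the window
kwh_per_1k = accel_kwh / (inferences / 1_000)
print(f"kWh per 1k inferences: {kwh_per_1k:.3f}")
```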
Practical next steps
- Run a 36-60 month power budget, including AI-driven growth and grid connection risk.
- Pilot liquid cooling on one high-density pod; measure PUE/WUE before and after.
- Enable energy-aware schedulers and power caps in your MLOps stack.
- Stand up BESS at one site to cut generator hours and firm renewables.
- Negotiate PPAs or tariff changes where you have predictable loads.
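The first step above, a multi-year power budget with a 2x stress scenario, can be prototyped in a few lines. The starting load and growth rates here are placeholders to replace with your own planning numbers:

```python
def power_budget(start_mw, annual_growth, months):
    """Project monthly site power demand under compound annual growth."""
    monthly = (1 + annual_growth) ** (1 / 12)
    return [start_mw * monthly ** m for m in range(months + 1)]

# Illustrative: 10 MW site, ~17%/yr baseline (tracking the sector CAGR),
# and a 2x-growth stress case, over a 60-month horizon.
baseline = power_budget(10.0, 0.17, 60)
stress = power_budget(10.0, 0.34, 60)

print(f"Baseline month 60: {baseline[-1]:.1f} MW")
print(f"Stress   month 60: {stress[-1]:.1f} MW")
```

Comparing the month-60 figures against your confirmed grid interconnect capacity shows how early you need to be in the queue.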
Skill up your team
If your roadmap includes AI workloads, upskilling cross-functional teams on efficiency techniques, MLOps, and infrastructure planning pays off quickly.
The takeaway is simple: AI is shifting data centre energy from an important consideration to a critical constraint. Treat electricity as capacity, schedule it as a resource, and build an energy strategy that can scale without surprises.