The Energy Bill for AI Infrastructure Is Coming Due
The International Energy Agency predicts datacentre energy demand will more than double by 2030. Electricity consumption from AI-optimised datacentres is projected to more than quadruple in the same period.
This is creating immediate problems. In the US, rising power demands from datacentres are being directly blamed for residential electricity price increases, according to a Consumer Affairs analysis of federal energy data.
Communities are pushing back against new datacentre developments. Chip makers are driving demand still higher - Nvidia's roadmap assumes 1MW racks will soon be standard, and the company is moving datacentre power distribution from 48V or 54V DC to 800V DC systems to supply them. More powerful GPUs also require more storage, networking, and cooling.
Where enterprises actually stand
IT and business leaders face real pressure, but the scale of the problem may be smaller than headlines suggest. Enterprise AI workloads represent a fraction of total cloud and datacentre consumption. Most datacentre power still goes to standard compute workloads, not AI.
The IEA estimated that in 2024, AI was responsible for only 15% of datacentre energy demand. Video streaming - cat videos especially - consumes far more resources than enterprise AI systems.
That said, energy used for inference (running AI models rather than training them) is projected to almost double by 2030, reaching 162.5 TWh. This creates both a challenge and an opportunity for cost and carbon reduction if efficiency is prioritised from design through deployment.
Three practical paths forward
Centralise around power-rich locations. Nscale, a European cloud operator, centres its datacentre network on Norway for access to cold-climate cooling and abundant hydroelectric power. Combining that kind of centralisation with cloud providers' engineering expertise helps drive down power demands.
Run specialised models closer to your data. Companies are downloading open-source models and running them internally rather than relying solely on cloud providers. Specialised models are far more efficient than general-purpose systems such as ChatGPT. Edge locations often have pre-existing power infrastructure, avoiding years-long waits for new grid connections.
HPE's principal for sustainable transformation said enterprises are "increasingly looking at where AI runs and how efficiently it can be deployed closer to their data and operations."
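As a rough illustration of what "running a specialised model closer to the data" can look like in practice, the sketch below loads a small open model locally instead of calling a hosted general-purpose service. The library (Hugging Face transformers) and the model name are illustrative choices, not tools named in this article.

```python
# Minimal sketch: classify text with a small open model on local hardware,
# with no external API call. Model choice is an illustrative assumption.
from transformers import pipeline

# A compact, task-specific open model; any similarly small model would do.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run a single request locally and print the predicted label and score.
print(classifier("The support team resolved my issue quickly.")[0])
```

A task-focused model like this is orders of magnitude smaller than a frontier chatbot, which is where the efficiency argument comes from.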
Treat efficiency as a design requirement from day one. Software-driven optimisation ensures compute is fully utilised and prevents wasted power from idle or over-provisioned infrastructure. Organisations adopting modern software-defined infrastructure have reported energy reductions of around 50% compared to legacy systems.
Nutanix's director of systems engineering emphasised workload optimisation: companies need to modernise infrastructure to reduce consumption and improve utilisation to avoid over-provisioning.
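One concrete way to treat utilisation as a design requirement is to measure it continuously and flag hardware that draws power while doing little work. The sketch below uses the NVIDIA management library bindings (pynvml) to do this; the 20% threshold is an arbitrary illustrative value, not a figure from this article.

```python
# Minimal sketch: poll GPU utilisation and power draw, and flag devices that
# are idle yet still consuming power - candidates for consolidation.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu  # percent busy
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # mW -> W
        if util < 20:  # assumed threshold for "under-utilised"
            print(f"GPU {i}: {util}% busy, drawing {power_w:.0f} W - "
                  "candidate for consolidation or power capping")
finally:
    pynvml.nvmlShutdown()
```

Feeding this kind of signal into scheduling or capacity planning is what turns "avoid over-provisioning" from a slogan into a measurable practice.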
The Jevons paradox problem
As AI infrastructure becomes more efficient, demand will likely increase. This mirrors a pattern identified by economist William Stanley Jevons in 1865 - more efficient steam engines led to greater coal consumption, not less.
The same dynamic applies to AI. More efficient systems will enable more use cases, potentially offsetting efficiency gains.
One expert suggested the real question is transparency. Food labelling doesn't force people to eat less sugar, but it lets them make informed decisions. The same approach could work for datacentre energy use - making the carbon impact of AI consumption visible so organisations can choose accordingly.
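The "labelling" idea reduces to simple arithmetic: energy consumed multiplied by the carbon intensity of the grid supplying it. The sketch below shows that calculation; every number in it is a placeholder assumption chosen for illustration, not a measured value.

```python
# Illustrative sketch: estimate CO2e for a batch of inference requests as
# energy used (kWh) x grid carbon intensity (g CO2e per kWh).
def inference_emissions_g(requests: int,
                          wh_per_request: float,
                          grid_intensity_g_per_kwh: float) -> float:
    """Return estimated grams of CO2e for a batch of inference requests."""
    energy_kwh = requests * wh_per_request / 1000
    return energy_kwh * grid_intensity_g_per_kwh

# Example: 1 million requests at an assumed 0.3 Wh each, compared across two
# assumed grid intensities (hydro-heavy vs fossil-heavy).
for label, intensity in [("hydro-heavy grid", 30), ("fossil-heavy grid", 450)]:
    grams = inference_emissions_g(1_000_000, 0.3, intensity)
    print(f"{label}: {grams / 1000:.1f} kg CO2e")
```

Making that figure visible per workload, rather than hiding it in an annual sustainability report, is what would let organisations choose accordingly.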
For now, the burden falls on IT leaders to build efficiency into infrastructure decisions. The alternative is watching energy costs and carbon footprints climb alongside AI adoption.
AI for Operations training can help teams understand the practical trade-offs between capability and efficiency.