AI-scale compute is forcing a rethink of data centre water and energy use
As AI demand surges, data centres are drawing far more electricity and water. Inside facilities, water cools servers so they run fast and stable, without overheating or downtime. That puts water scarcity and sustainability squarely on the agenda for operators and boards.
Ecolab, a global sustainability company, works with data centres to reduce industrial water use and identify further savings. In a recent discussion, Paul Overbeck, senior corporate account manager for global data centres at the company, explained how operators can meet rising thermal loads without sacrificing reliability or environmental goals.
How Ecolab helps operators cut risk and consumption
Ecolab partners with hyperscalers, colocation providers, integrators and OEMs to codify pre-commissioning and operational best practices. The aim is simple: standard, repeatable methods that reduce risk across the cooling stack.
"Ecolab is focused on helping operators combine pre-commissioning discipline, chemistry expertise and real-time telemetry to make liquid cooling predictable and scalable," Overbeck said. He highlighted growth in rack-level monitoring and telemetry, pre-commissioning services like flushing and staged filtration, closed-loop water conservation strategies, and advisory programs that map cooling choices to sustainability targets.
The team takes a broad systems view. Start with site selection and available water resources. Design the right cooling topology for that location. Put commissioning protocols in place to prevent day-one issues. Then keep the system on track with continuous, real-time monitoring.
"The objective is practical outcomes with fewer interventions, optimised energy and water use, and predictable capacity growth," he added.
Standard methods and telemetry that make cooling repeatable
Ecolab contributes to the Open Compute Project, a community that rethinks hardware to meet growing compute demand. "We're working to standardise pre-commissioning steps, cleanliness acceptance criteria, and instrumentation practices for the industry as a whole," Overbeck said. "That collaboration is helping us create repeatable checklists and measurable handover criteria so operators can deploy liquid cooling at scale with consistent sustainability and reliability outcomes."
Ecolab's 3D TRASAR technology gives live visibility into key coolant health indicators: pH, conductivity, turbidity, glycol concentration and temperature. "That minute-by-minute view converts what was once occasional lab checks into an early-warning system, helping teams detect drift or contamination quickly and take action before an upset escalates to become a major problem," he noted.
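As an illustration of the early-warning idea (a minimal sketch, not Ecolab's actual implementation), such a monitor can be modelled as a set of acceptable bands checked against each incoming sensor reading. All parameter names and limits below are hypothetical:

```python
# Hypothetical coolant-health monitor: flags any reading that drifts
# outside an acceptable band so teams can act before an upset escalates.
# Parameter names and limits are illustrative assumptions, not vendor values.

LIMITS = {
    "ph": (7.0, 9.5),             # assumed closed-loop range
    "conductivity_uS": (0, 2000), # microsiemens/cm
    "turbidity_NTU": (0, 5),
    "glycol_pct": (20, 30),
    "temp_C": (15, 45),
}

def check_reading(reading: dict) -> list[str]:
    """Return an alert string for each out-of-band or missing parameter."""
    alerts = []
    for param, (lo, hi) in LIMITS.items():
        value = reading.get(param)
        if value is None:
            alerts.append(f"{param}: no data (possible sensor fault)")
        elif not lo <= value <= hi:
            alerts.append(f"{param}={value} outside [{lo}, {hi}]")
    return alerts

reading = {"ph": 9.8, "conductivity_uS": 850, "turbidity_NTU": 1.2,
           "glycol_pct": 24, "temp_C": 31}
print(check_reading(reading))  # flags the pH excursion
```

Run minute by minute against live telemetry, a check like this turns occasional lab sampling into continuous assurance: a single out-of-band reading triggers investigation rather than waiting for the next scheduled test.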
AI loads change facility math
AI workloads drive higher rack densities and heat output, making cooling an operational priority. Researchers have estimated that data centres could withdraw more than one trillion gallons of fresh water annually by 2027. Forward-looking water management now sits at the centre of resilience planning.
Overbeck put it plainly: "The smartest designs balance both sides of the water-energy nexus, choosing cooling architectures and operating practices that support a data centre's compute capacity while minimising their overall environmental impact."
Commissioning is where many stumble
Too many teams treat commissioning as an afterthought. Overbeck pointed to recurring gaps that drive early failures and unnecessary interventions:
- System cleanliness is under-engineered.
- Staged filtration is insufficient, and flushing is done at low velocity.
- Test water is left stagnant after hydrotests.
- "Procedure water" specifications are not documented.
These issues lead to chemistry drift, biofilm growth, and particulates that clog cold plates. The result is overheating, reduced efficiency and reliability incidents. All of it is avoidable with a solid commissioning plan and clear acceptance criteria.
What executives should do next
The water discussion cannot live only in day-to-day operations. It belongs in site strategy, design choices and handover standards. Here is a practical checklist to align engineering decisions with business outcomes:
- Assess the site up front: local water availability, source quality, regulatory constraints and the carbon intensity of local electricity.
- Choose cooling architectures (air, hybrid, direct-to-chip) that fit local conditions. Plan for reuse, containment and side-stream treatment where practical.
- Set return-temperature targets and wastewater handling plans early. These early decisions move sustainability metrics and long-term operating cost more than most later adjustments.
- Build standard pre-commissioning checklists across projects. Define cleanliness acceptance criteria and measurable handover requirements.
- Require rack-level telemetry and real-time chemistry monitoring on closed loops. Turn sporadic checks into continuous assurance.
- Establish incident playbooks tied to sensor thresholds so teams act before a minor deviation becomes an outage risk.
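One minimal way to express the last item (a hypothetical sketch under assumed thresholds, not a vendor feature) is a table mapping sensor deviations to graded responses:

```python
# Hypothetical incident playbook: maps a sensor deviation to a graded
# response before it becomes an outage risk. Thresholds and actions
# are illustrative assumptions.

PLAYBOOK = [
    # (parameter, warn_threshold, alarm_threshold, action on alarm)
    ("turbidity_NTU", 3.0, 5.0, "engage side-stream filtration"),
    ("ph_drift", 0.3, 0.8, "dose inhibitor and sample for lab analysis"),
    ("delta_temp_C", 3.0, 6.0, "shed load and inspect cold plates"),
]

def triage(deviations: dict) -> list[tuple[str, str]]:
    """Classify each deviation as a warning or an alarm with its action."""
    actions = []
    for param, warn, alarm, action in PLAYBOOK:
        value = deviations.get(param, 0.0)
        if value >= alarm:
            actions.append((param, f"ALARM: {action}"))
        elif value >= warn:
            actions.append((param, "WARN: increase sampling frequency"))
    return actions

print(triage({"turbidity_NTU": 5.5, "ph_drift": 0.4}))
```

The point of codifying the playbook this way is that the response to a given excursion is decided in advance and reviewed like any other standard, rather than improvised during an incident.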
The management takeaway
AI will keep raising heat density and scrutiny on water. Leaders who standardise commissioning, instrument their loops, and pick cooling designs that fit their sites will protect uptime, lower water intensity and keep headroom for growth. This is an operations problem, yes, but it is also a strategy problem that deserves board time and clear targets.