Vertiv Frontiers 2026: AI Drives Gigawatt-Scale Data Centers with Digital Twins, High-Voltage DC and Adaptive Liquid Cooling

AI-era data centers need a new ops playbook: denser racks, gigawatt builds, whole-facility compute, and mixed chips. Electrical, thermal, and deployment must advance together.

Published on: Jan 09, 2026

AI-era data centers: what operations teams need to plan for now

AI is forcing a new playbook for data center design and day-to-day ops. A new report from Vertiv points to four macro forces: extreme rack density, gigawatt-scale buildouts, treating the facility as a single compute system, and a wider range of chips driving mixed requirements.

If you own uptime and capacity, the message is simple: electrical, thermal, and deployment practices have to advance together. Waiting for "next year's refresh" won't cut it.

Macro forces to account for

  • Extreme densification: GPU-heavy racks push beyond conventional air cooling and legacy PDUs.
  • Gigawatt scale at speed: sites need to be planned, permitted, and delivered far faster, often in modular blocks.
  • Data center as a unit of compute: treat the facility as an integrated system, not a collection of rooms.
  • Silicon diversification: CPUs, GPUs, and accelerators with very different thermal and electrical profiles must coexist.

1) Scale electrical infrastructure for AI

Most sites still run hybrid AC/DC distribution with multiple conversion stages. As rack loads climb, conversion losses and conductor sizing become a tax on both cost and build speed.

Higher-voltage DC at the room level reduces current, cuts conversion steps, and can simplify distribution. On-site generation and microgrids make the case even stronger.

  • Map current and near-term rack densities; set thresholds where AC-only becomes a bottleneck.
  • Evaluate 380-400V DC architectures for new halls; build hybrid AC/DC migration plans for retrofits.
  • Standardize busway and connector types to shorten deployment intervals.
  • Run a loss audit end-to-end (grid to rack) to quantify savings from fewer conversions.
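The loss audit and voltage comparison above can be sketched with a few lines of arithmetic. The stage efficiencies below are illustrative placeholders, not measured values from the report; a real audit would substitute commissioning data for each conversion stage.

```python
from functools import reduce

def chain_efficiency(stage_effs):
    """Overall efficiency of a series of power conversion stages."""
    return reduce(lambda a, b: a * b, stage_effs, 1.0)

def distribution_current(power_kw, voltage_v):
    """Current drawn by a rack at a given distribution voltage
    (simplified: single conductor, unity power factor)."""
    return power_kw * 1000 / voltage_v

# Assumed stage efficiencies for illustration only:
ac_chain = [0.99, 0.96, 0.94, 0.92]   # transformer, UPS, PDU, rack PSU
dc_chain = [0.99, 0.97, 0.96]         # rectifier, DC busway, rack converter

rack_kw = 120  # a dense AI rack

print(f"AC chain efficiency: {chain_efficiency(ac_chain):.3f}")
print(f"DC chain efficiency: {chain_efficiency(dc_chain):.3f}")
print(f"Current at 415 V: {distribution_current(rack_kw, 415):.0f} A")
print(f"Current at 400 V DC: {distribution_current(rack_kw, 400):.0f} A")
```

Even with placeholder numbers, the shape of the argument holds: fewer conversion stages multiply out to a higher end-to-end efficiency, and higher voltage directly lowers current and therefore conductor size.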

2) Distributed AI changes where inference runs

Training may live in hyperscale sites, but inference will be mixed across cloud, colocation, and on-premises deployments, especially for regulated sectors with latency, security, or data residency constraints.

Ops teams should plan for local, high-density capacity that can spin up without rebuilding the entire facility.

  • Pre-qualify liquid-ready racks and CDUs for brownfield spaces.
  • Create standard "AI blocks" (racks, power paths, cooling loops) that can be repeated.
  • Tune network and storage for low-latency inference adjacent to data sources.
  • Align controls and monitoring so cloud and on-prem clusters can be operated as one environment.
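A standard "AI block" can be captured as a small, versionable spec so each repeat build starts from the same definition. This is a minimal sketch; every field value and name here is an illustrative assumption, not a Vertiv specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIBlock:
    """A repeatable unit of AI capacity: racks plus power and cooling paths.
    All values are illustrative assumptions for planning, not vendor specs."""
    racks: int
    rack_kw: float
    power_feeds: int          # redundant power paths (e.g. 2 for A/B feeds)
    cooling_loop_kw: float    # CDU capacity serving the block

    @property
    def it_load_kw(self) -> float:
        """Total IT load of the block."""
        return self.racks * self.rack_kw

    def cooling_headroom(self) -> float:
        """Fraction of CDU capacity left after the block's IT load."""
        return 1.0 - self.it_load_kw / self.cooling_loop_kw

# A hypothetical block: 8 racks at 120 kW on a 1.2 MW cooling loop.
block = AIBlock(racks=8, rack_kw=120, power_feeds=2, cooling_loop_kw=1200)
print(f"{block.it_load_kw:.0f} kW IT, {block.cooling_headroom():.0%} cooling headroom")
```

Encoding the block as data rather than tribal knowledge makes the "repeat on demand" goal checkable: a new hall either fits N blocks or it doesn't.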

3) Energy autonomy moves from backup to baseline

Grid constraints are real. More sites are adding extended on-site generation, often natural gas turbines or multi-fuel plants, to meet capacity timelines and stabilize costs.

Think beyond generator-as-backup to multi-day autonomy and integrated thermal planning.

  • Run a feasibility study for "bring your own energy (and cooling)" including interconnect, permitting, and emissions.
  • Model capex/opex trade-offs for turbines, fuel cells, and storage; include heat recovery options.
  • Design protection and controls so on-site sources and utility supply operate safely in all modes.
  • Pre-negotiate fuel and maintenance SLAs sized to AI loads, not legacy baselines.
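The capex/opex modeling step above can start as a rough levelized-cost comparison before any detailed financial model. The sketch below uses hypothetical inputs and deliberately skips discounting, heat recovery, and emissions costs; it only shows the shape of the trade-off, and real numbers must come from vendor quotes.

```python
def levelized_cost_per_mwh(capex_usd, annual_opex_usd, fuel_usd_per_mwh,
                           capacity_mw, capacity_factor, years):
    """Rough undiscounted levelized cost of on-site energy (illustrative only)."""
    mwh_per_year = capacity_mw * capacity_factor * 8760
    total_cost = capex_usd + years * (annual_opex_usd + fuel_usd_per_mwh * mwh_per_year)
    return total_cost / (years * mwh_per_year)

# Hypothetical inputs for two 100 MW options over 15 years:
turbine = levelized_cost_per_mwh(capex_usd=80e6, annual_opex_usd=2e6,
                                 fuel_usd_per_mwh=35, capacity_mw=100,
                                 capacity_factor=0.9, years=15)
fuel_cell = levelized_cost_per_mwh(capex_usd=150e6, annual_opex_usd=3e6,
                                   fuel_usd_per_mwh=45, capacity_mw=100,
                                   capacity_factor=0.95, years=15)
print(f"turbine ~ ${turbine:.0f}/MWh, fuel cell ~ ${fuel_cell:.0f}/MWh")
```

The useful output is not the absolute number but the sensitivity: at AI-scale capacity factors, fuel price dominates capex quickly, which is why sizing fuel SLAs to AI loads (not legacy baselines) matters.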

4) Digital twins for design and daily ops

Digital twins let you specify the site virtually, integrate IT with MEP, and deploy prefabricated modules as repeatable compute units. Vertiv's report suggests this approach can cut time-to-token by up to 50%.

The real win is ongoing operations: test changes in the twin before touching the floor.

  • Build the twin from the start: electrical one-line, thermal models, controls logic, and failure modes.
  • Integrate DCIM, EPMS, and BMS streams to keep the twin synced with reality.
  • Use the model for change control, what-if scenarios, and staff training.
  • Standardize a library of modules (power rooms, cooling blocks, AI pods) to speed repeat builds.
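Keeping the twin "synced with reality" implies a routine check that live telemetry still matches what the model predicts. A minimal drift check might look like the sketch below; the metric names and tolerance are assumptions for illustration, not DCIM/EPMS/BMS field names.

```python
def twin_drift(predicted: dict, measured: dict, tolerance: float = 0.05):
    """Return metrics where live telemetry deviates from the twin's
    prediction by more than `tolerance` (relative difference)."""
    drifted = {}
    for metric, expected in predicted.items():
        actual = measured.get(metric)
        if actual is None:
            continue  # no live reading for this metric; handle separately
        if abs(actual - expected) / abs(expected) > tolerance:
            drifted[metric] = (expected, actual)
    return drifted

# Hypothetical twin predictions vs. live readings for one hall:
twin = {"supply_temp_c": 24.0, "hall_load_kw": 950.0, "ups_efficiency": 0.96}
live = {"supply_temp_c": 26.2, "hall_load_kw": 960.0, "ups_efficiency": 0.95}
print(twin_drift(twin, live))
```

A drifted metric is a prompt to recalibrate the model or investigate the floor; either way, the twin stays trustworthy enough to use for change control and what-if scenarios.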

5) Adaptive liquid cooling gets smarter

Liquid is now essential for many GPU racks. The next step: smarter controls that predict failures, balance loops, and protect components in real time.

Add sensors, telemetry, and analytics so the cooling system can self-correct under load swings.

  • Decide on approach by workload and facility: rear-door heat exchangers, direct-to-chip, or immersion.
  • Instrument for flow, pressure, temperature deltas, and water chemistry; set automated safeties.
  • Design isolation, filtration, and leak response into every loop.
  • Train ops on maintenance cycles specific to liquid systems (cleanliness, gasket life, coolant quality).
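The "automated safeties" in the checklist above reduce, at their simplest, to band checks on loop telemetry. This is a minimal sketch with assumed limits; real thresholds come from the CDU vendor and facility commissioning, and production systems would add trending and predictive analytics on top.

```python
# Illustrative safety bands for a direct-to-chip loop (assumed values):
LIMITS = {
    "flow_lpm":     (80.0, 200.0),  # min/max coolant flow
    "pressure_bar": (1.5, 4.0),     # min/max loop pressure
    "delta_t_c":    (4.0, 15.0),    # min/max inlet-outlet temperature rise
}

def check_loop(reading: dict) -> list:
    """Return alarm strings for any reading outside its safe band,
    plus an alarm for any expected metric with no telemetry at all."""
    alarms = []
    for key, (lo, hi) in LIMITS.items():
        value = reading.get(key)
        if value is None:
            alarms.append(f"{key}: no telemetry")
        elif not lo <= value <= hi:
            alarms.append(f"{key}: {value} outside [{lo}, {hi}]")
    return alarms

# A hypothetical reading with low flow:
print(check_loop({"flow_lpm": 65.0, "pressure_bar": 2.2, "delta_t_c": 9.0}))
```

Note that a missing sensor is treated as an alarm, not a pass: a loop that stops reporting is exactly the loop you want operators looking at.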

90-day action plan for operations leaders

  • Baseline: rack densities, electrical losses, cooling headroom, and grid constraints.
  • Decide: your path to liquid cooling and your DC voltage strategy for the next two builds.
  • Pilot: a small digital twin connected to live telemetry for one hall.
  • Prep: standard AI "block" (racks + electrical + cooling) that can be repeated on demand.
  • Assess: on-site energy options with a business case and permitting path.

If you want background reading, the Open Compute Project's work on rack and data center power, from 48V rack busbars to higher-voltage DC designs, is useful context: OCP high-voltage DC resources. For cooling guidance, see ASHRAE's data center resources: ASHRAE data center guidance.


Vertiv provides critical digital infrastructure across more than 130 countries, spanning electrical systems, thermal solutions, and IT infrastructure from cloud to edge. Learn more at Vertiv.com.

