Turner Construction Doubles Data Center Revenue as AI Projects Drive 40% of Backlog
Turner Construction's data center business has become a primary growth engine. Revenue climbed from $3.6 billion for full-year 2024 to $6.4 billion in just the first nine months of 2025, and the company expects around $9 billion for the full year. Nearly 40% of Turner's $40.3 billion total backlog is now tied to data centers.
The company is executing work for 30+ clients across 250+ projects worldwide. The driver is clear: AI compute and cloud capacity are scaling faster than traditional delivery models, pushing owners and builders to lock in power, equipment, and labor years ahead.
Numbers That Matter
- $6.4B data center revenue in the first nine months of 2025; tracking to ~$9B for the year
- ~40% of a $40.3B backlog linked to data centers
- 250+ projects for 30+ clients across multiple regions
- Fresh orders also piling up in semiconductor manufacturing, adding another growth layer
Key Projects Fueling the Build-Out
- $15B data center complex in Wisconsin under OpenAI's Stargate program (Turner involved)
- $6B, 100 MW AI-focused facility in Pennsylvania for CoreWeave (Turner involved)
- 64 MW high-density, liquid-cooling-ready data center in Cyberjaya (delivered by Leighton)
- Major data center project in Madrid (Turner with Dragados)
Why This Matters for Developers, Owners, and GCs
Turner's vice president Chris McFadden notes clients are already placing advance orders for critical mechanical and electrical systems that won't arrive until 2027. That's the tell. Demand is outpacing standard procurement cycles, and the firms that secure long-lead items early will control delivery timelines.
Turner is partnering with customers and suppliers to protect schedules and pre-empt supply bottlenecks. If you're planning capacity, assume extended lead times and structure contracts around availability, not wishful dates.
Long-Lead Items to Lock Early
- Large power transformers, generators, MV switchgear, UPS, PDUs, busway
- Chillers, adiabatic/dry coolers, cooling towers, CRAH/CRAC units
- Liquid cooling gear: CDUs, rear-door heat exchangers, pumps, heat exchangers
- High-capacity breakers, controls, BMS/SCADA components, fiber and network gear
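Locking these items early is fundamentally a backward-scheduling problem: start from the target energization date and subtract each item's lead time plus a buffer. As a minimal sketch, assuming illustrative lead times (not vendor quotes) and a hypothetical `latest_order_dates` helper:

```python
from datetime import date, timedelta

# Illustrative lead times in weeks. These are assumptions for the sketch,
# not quotes; actual lead times vary widely by vendor and market.
LEAD_TIMES_WEEKS = {
    "large power transformer": 120,
    "generator": 80,
    "MV switchgear": 60,
    "UPS": 50,
    "chiller": 55,
    "CDU": 30,
}

def latest_order_dates(energize: date, buffer_weeks: int = 8) -> dict[str, date]:
    """Work backward from energization to the latest safe PO date per item."""
    return {
        item: energize - timedelta(weeks=weeks + buffer_weeks)
        for item, weeks in LEAD_TIMES_WEEKS.items()
    }

# Sorted earliest-first: the longest-lead gear surfaces at the top.
for item, po_date in sorted(latest_order_dates(date(2027, 6, 1)).items(),
                            key=lambda kv: kv[1]):
    print(f"{po_date}  order: {item}")
```

Run against any target date, the transformer order lands years ahead of energization, which is exactly why Turner's clients are placing 2027 equipment orders today.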
Design Shifts: High Density and Liquid Cooling
The Cyberjaya win highlights where designs are headed: high density with liquid-cooling readiness. AI and HPC racks at 50-100 kW+ are pushing beyond air-only solutions, leading to warm-water loops, CDUs, rear-door heat exchangers, and tighter integration between MEP and IT.
Expect more hybrid builds: air for general workloads, liquid for AI clusters. Plan for serviceability, water quality management, leak detection, and trades familiar with pipefitting scopes inside the white space. For context on adoption and trade-offs, see the Uptime Institute's research on liquid cooling.
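The air-versus-liquid split in a hybrid build falls out of rack density. A minimal sketch, assuming a hypothetical 40 kW/rack threshold (many designs shift to liquid somewhere in the 30-50 kW range) and made-up rack counts:

```python
# Each entry is (rack_count, kW_per_rack). Figures are illustrative.
def cooling_split(racks, liquid_threshold_kw=40.0):
    """Split total IT load into air-cooled vs liquid-cooled kW."""
    air = sum(n * kw for n, kw in racks if kw < liquid_threshold_kw)
    liquid = sum(n * kw for n, kw in racks if kw >= liquid_threshold_kw)
    return air, liquid

general = (200, 12.0)   # general-purpose racks stay on air
ai = (64, 90.0)         # AI training racks go to liquid
air_kw, liquid_kw = cooling_split([general, ai])
print(f"air: {air_kw:.0f} kW, liquid: {liquid_kw:.0f} kW")
```

Even with three times as many general-purpose racks, the liquid loop carries more than twice the air load here, which is why reserving water loops and CDU space up front matters.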
Site and Utility Strategy
- Power first: secure interconnects, capacity reservations, and LOAs early. Large power transformers remain tight; see the DOE's supply chain updates on transformer availability.
- Fiber and latency: dual-path, carrier diversity, and proximity to AI clusters or cloud on-ramps.
- Water: rights, reuse, or air-cooled strategies based on climate and ESG commitments.
- Permitting: engage AHJs early; pre-file where possible; align on noise, heat rejection, and traffic.
Delivery Playbook That's Working
- Modular and prefab MEP to compress schedules and reduce onsite risk
- Framework agreements with OEMs to smooth allocations and pricing
- Phased capacity (pods/blocks) to match power availability and IT ramp
- Commissioning depth: FAT where possible, staged IST, repeatable SOPs/MOPs
- Risk sharing: milestone structures that reflect material release and grid timelines
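Phased capacity is the piece most amenable to simple modeling: given the utility's power ramp, how many pods can be live each quarter? A minimal sketch with assumed figures (8 MW pods, a made-up ramp):

```python
POD_MW = 8.0  # assumed pod size; real blocks vary by design

# Illustrative utility ramp: MW available to the site per quarter.
power_ramp = {"2026Q1": 16, "2026Q2": 16, "2026Q3": 32, "2026Q4": 48}

def pods_live(ramp, pod_mw=POD_MW):
    """Pods that can be energized each quarter, capped by available power."""
    return {q: int(mw // pod_mw) for q, mw in ramp.items()}

for quarter, pods in pods_live(power_ramp).items():
    print(f"{quarter}: {pods} pods ({pods * POD_MW:.0f} MW)")
```

The point of the exercise is the cap itself: if pod deliveries outrun the ramp, gear sits; if they lag it, paid-for power waits on compute.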
What This Signals for Capital Allocation
ACS, Turner's parent group, is leaning into digital infrastructure while adding semiconductor work on top. For investors and owners, the pairing of data centers + fabs creates a multi-year window for technical construction with higher MEP intensity and longer planning horizons than typical commercial projects.
Clients are booking capacity years in advance, and pipelines stretch beyond 2027. If you need compute-ready space, the clock isn't your friend; your advantage comes from how early you move on power, gear, and partners.
Next Steps for Your Team
- Lock utility conversations now; treat transformers and switchgear like critical path items.
- Design for density: reserve water loops and whitespace for future liquid-cooling zones.
- Standardize one or two repeatable MEP topologies to scale across sites.
- Secure OEM allocations and second sources; don't rely on single vendors for critical gear.
- Phase builds to match IT demand and power delivery; don't let gear sit or compute wait.
Upskill for AI-Driven Projects
If you're standing up a team to deliver AI-ready facilities, a baseline understanding of AI workloads helps decision-making around density, networking, and cooling. Build curated learning paths by role so estimators, MEP engineers, and commissioning agents share a common vocabulary.
Bottom line: Turner's numbers show where the market is headed. The firms that secure power, lock long-leads, and build repeatable delivery models will capture the surge in AI infrastructure spend while everyone else waits on equipment.