How AI Is Forcing a Complete Rethink of Data Centre Strategy

AI is transforming data centre strategy by prioritizing speed, proximity, and agility for GPU-ready infrastructure. Low-latency connectivity and modular cooling are now essential for AI workloads.

Published on: Aug 27, 2025

AI is Reshaping Data Centre Strategy: Insights from Telehouse Executives

Artificial intelligence has moved beyond experimental stages and is now firmly embedded in production environments. This shift is changing how data centres are planned, constructed, and connected. The focus for data centre operators worldwide has shifted from simply providing floor space to rapidly deploying GPU-ready infrastructure close to end users.

Executives from Telehouse — Andy Fenton (VP Sales and Marketing, Canada), Ken Miyashita (Managing Director, Thailand), and Sami Slim (CEO, France) — share their perspectives on how AI influences strategy, infrastructure, and investment priorities.

AI Deployment Shifts Priorities

Sami Slim emphasizes that with AI entering real deployment phases, businesses prioritize speed, proximity, and agility over raw capacity. The key question is how quickly high-density GPU racks can be operational near users to support demanding AI workloads and data-heavy applications.

Data centre design now treats power, cooling, and fibre connectivity as an integrated challenge. Modular, scalable structures with built-in liquid cooling and dedicated fibre paths from the outset are becoming the norm.

Proximity Drives Performance and Value

Ken Miyashita highlights the critical role of physical proximity in reducing latency. Locating AI compute resources within the same metro network shortens fibre paths, cutting round-trip latency. This latency reduction benefits revenue-sensitive operations like real-time pricing, algorithmic trading, and personalized recommendations.

Industries such as finance, media, and gaming target single-digit-millisecond latency. Retailers use suburban edge sites for recommendation engines, while trading desks run live language translation on racks consuming over 100 kW within metro fibre rings. If GPUs were farther away, WAN latency would diminish the benefits of additional compute power.

Customers are increasingly willing to pay premiums for metro-level proximity that combines low latency, compliance with local data regulations, and reduced transit costs.
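The proximity argument above comes down to physics: signal propagation in fibre is bounded by the speed of light divided by the fibre's refractive index, so round-trip time grows linearly with route length. As a rough sketch (assuming a typical single-mode refractive index of ~1.47, and ignoring switching, queuing, and serialization delays, which add on top):

```python
# Best-case fibre round-trip-time estimate (propagation delay only).
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47            # assumed typical single-mode fibre index

def fibre_rtt_ms(route_km: float) -> float:
    """Lower-bound round-trip time in milliseconds for a fibre route."""
    one_way_s = route_km / (C_VACUUM_KM_S / FIBRE_INDEX)
    return 2 * one_way_s * 1000

# A ~50 km metro route stays well under 1 ms, while a 1,000 km
# WAN hop alone costs nearly 10 ms before any equipment delay.
print(f"{fibre_rtt_ms(50):.2f} ms")
print(f"{fibre_rtt_ms(1000):.2f} ms")
```

This is why metro-level placement, not just more GPUs, determines whether single-digit-millisecond targets are achievable at all.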

Future-Proofing Under Tight Timelines

Rapid infrastructure scaling—measured in months rather than years—is now a standard expectation. Slim notes that future-proofing begins with quickly establishing fundamentals: GPU racks supporting 100–130 kW, dark fibre paths avoiding congested exchange points, and adaptable cooling systems capable of transitioning from air to liquid as demand increases.

Connectivity is equally crucial. Research from Telehouse involving over 900 senior IT leaders found that over 90% consider direct cloud on-ramps essential for AI and machine learning, yet 55% have faced serious network challenges. Telehouse’s upcoming AI-ready 2 MW module, set to open in December 2025, exemplifies infrastructure designed to meet these needs.

In Thailand, Miyashita points out that carrier neutrality combined with a clear liquid-cooling roadmap is key to maintaining performance, reliability, and agility during rapid deployment.

Connectivity: From Optional to Essential

Andy Fenton stresses that connectivity expectations have hardened. Low-latency access is no longer optional but mandatory, especially in hybrid cloud and GPU-as-a-service scenarios.

For clients in Toronto, this means direct, high-performance connections across cloud, edge, and on-premises environments. Inference engines must connect within single-digit millisecond latency to cloud endpoints; otherwise, the AI business case weakens.

Facilities require same-day activation of cross-connects and diverse fibre routes that bypass congested nodes. Operators should build for expansion rather than retrofitting under time pressure.

Specialised Services Influence Buyer Decisions

Beyond space and power, specialised offerings now shape purchasing choices. Miyashita highlights two critical services: shared liquid-cooled GPU capacity and compliance expertise.

  • Shared liquid cooling lets customers scale from pilot projects to multi-tenant deployments without heavy upfront hardware costs.
  • Compliance expertise keeps training data within local regulatory boundaries while enabling access across neighboring markets.

Both require strong engineering skills and careful power budget management to ensure seamless scaling.

Buyer Questions Reveal Changing Priorities

As AI workloads grow, buyers focus on tangible proof rather than promises. Fenton explains that experienced customers demand evidence of live connectivity, active carriers, cloud on-ramps, and concrete latency guarantees within contracts.

Cooling strategies are scrutinized: walkthroughs of the transition from air to liquid cooling are now part of evaluations. Transparency in efficiency metrics such as Power Usage Effectiveness (PUE), water consumption, and carbon emissions also influences decisions.
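PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal illustration (the figures below are hypothetical, not Telehouse measurements):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.

    1.0 is the theoretical ideal (all power reaches IT equipment);
    the gap above 1.0 is cooling, power conversion, and overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,300 kW total draw serving a 1,000 kW IT load.
print(round(pue(1300, 1000), 2))  # 1.3
```

Liquid cooling matters here because it typically shrinks the overhead term in the numerator, which is why cooling roadmaps and PUE transparency are evaluated together.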

Miyashita adds that customers closely examine current rack power limits and upgrade timelines. Providers offering clear, detailed responses demonstrate readiness for future growth.

Investment Drivers Beyond 2026

Looking ahead, AI and sustainability will guide capital allocation. Slim identifies two trends: the integration of generative AI assistants into mainstream tools and a shift from static sustainability reporting toward real-time operational dashboards.

Operators capable of hosting 100 kW+ racks on dark fibre city-centre rings, combined with heat recovery and granular metering, will attract new demand.

Miyashita points to joint builds as a growing trend, where telcos, cloud providers, and operators share infrastructure and risks. Campuses with clear scaling paths for power and cooling technologies will be most appealing.

Fenton underscores interconnection as critical: facilities lacking low-latency, direct cloud and edge routes will struggle to compete regardless of power or space.

AI Accelerates Data Centre Evolution

Across continents, a clear theme emerges: AI is speeding up change in the data centre industry. Success relies less on space and power alone and more on deployment speed, network flexibility, and proximity to users.

From Paris to Bangkok to Toronto, Telehouse executives agree that future-proof designs, transparent performance metrics, and specialised services will keep customers ahead in their AI endeavors.

With milliseconds directly affecting revenue, the ability to rapidly deploy high-density, low-latency infrastructure will define industry leaders going forward.


