How Data Centres Are Preparing for AI-Driven Finance
AI is changing how finance consumes compute. Banks, insurers and asset managers are pushing for more capacity, lower latency and tighter controls. Data centre operators are reworking power, cooling and HPC strategies to keep pace while staying inside regulatory and sustainability guardrails.
This shift will be a core theme at our upcoming breakfast roundtable in New York, where industry leaders will discuss how digital infrastructure affects AI-driven change across the sector. If your teams rely on trading models, risk engines or compliance analytics, this is your moment to pressure-test your infrastructure plan.
Powering AI-Driven Finance
AI workloads in trading, risk and compliance are climbing fast, with forecasts that AI-ready capacity will grow more than 30% per year to 2030. Investment banks and the cloud providers that serve them are committing hundreds of billions to new builds, with individual programmes now measured in tens of gigawatts.
That scale is forcing new choices on site selection, power procurement and grid strategy. Operators are locking in long-term PPAs, exploring on-site generation and even evaluating nuclear or advanced gas to guarantee steady supply for latency-sensitive HPC workloads. Expect grid-interconnection lead times and regional energy policy to become competitive advantages (or constraints) for financial institutions.
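To make the growth claim concrete, here is a minimal sketch of what sustained ~30% annual growth implies for capacity planning. The base figure and horizon are hypothetical placeholders, not sourced forecasts:

```python
# Illustrative projection of AI-ready data centre capacity under a ~30%
# compound annual growth rate. The 10 GW starting point is a hypothetical
# placeholder, not a sourced figure.

def project_capacity(base_gw: float, cagr: float, years: int) -> list[float]:
    """Return year-by-year capacity in GW under constant compound growth."""
    return [base_gw * (1 + cagr) ** y for y in range(years + 1)]

trajectory = project_capacity(base_gw=10.0, cagr=0.30, years=5)
for year, gw in enumerate(trajectory, start=2025):
    print(f"{year}: {gw:.1f} GW")  # capacity roughly 3.7x over five years
```

At 30% per year, capacity more than triples in five years, which is why interconnection lead times measured in years become a binding constraint.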
Cooling Next-Gen Financial Compute
Traditional air cooling is hitting its limits as AI clusters push rack densities far beyond historical norms, especially for high-frequency trading, real-time risk and fraud analytics. Tam Dell'Oro, Founder of Dell'Oro Group, notes: "The proliferation of accelerated computing to support AI and machine learning workloads has emerged as a major driver in the data centre physical infrastructure market.
"AI workloads require significantly higher power densities, with rack power needs rising from an average of 15 kW today to between 60 kW and 120 kW in the near future. This shift is accelerating the industry-wide transition from air to liquid cooling."
To meet that reality, operators are rolling out direct-to-chip liquid cooling, rear-door heat exchangers and, in some cases, full immersion systems. These approaches reduce cooling energy, allow higher density and help operators hit tougher efficiency and emissions targets set by regulators and investors. For guidance on best practices, see ASHRAE's data centre resources.
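The density figures in the quote map naturally onto a cooling-technology decision. The sketch below illustrates that mapping; the kW thresholds are rough planning assumptions for illustration, not vendor or ASHRAE-specified limits:

```python
# Sketch of a cooling-technology selector based on rack power density.
# The thresholds are illustrative planning assumptions only.

AIR_LIMIT_KW = 30        # assumed practical ceiling for air cooling
REAR_DOOR_LIMIT_KW = 60  # assumed ceiling for rear-door heat exchangers

def suggest_cooling(rack_kw: float) -> str:
    """Map a rack's power draw to a plausible cooling approach."""
    if rack_kw <= AIR_LIMIT_KW:
        return "air"
    if rack_kw <= REAR_DOOR_LIMIT_KW:
        return "rear-door heat exchanger"
    return "direct-to-chip or immersion liquid cooling"

# Today's average (15 kW) vs the 60-120 kW range cited above:
for density in (15, 45, 90, 120):
    print(f"{density} kW -> {suggest_cooling(density)}")
```

Under these assumptions, the jump from 15 kW to 60-120 kW racks moves most new capacity straight past air cooling into liquid-based approaches.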
Scaling HPC for Finance
Financial firms are moving from pilots to production platforms for credit decisioning, portfolio optimisation and generative client communications. That shift is driving growth in colocation sites and cloud regions near major financial hubs to cut latency between front-office apps and back-end AI clusters.
New builds and retrofits centre on dense GPU clusters, high-speed interconnects and storage tuned for large language models and market data lakes. Operators that offer flexible, high-density suites, with clear paths to grow from a few hundred kilowatts to multi-megawatt blocks, are becoming strategic partners to banks and insurers refining their data centre AI strategy.
Modernisation, Regulation and Risk
Supervisors now expect operational resilience, data sovereignty and strong controls over AI systems. That pushes institutions to modernise not just for speed, but for governance: where data is processed, how models are monitored and how failover is handled across regions and providers.
Deloitte asks, "Can US infrastructure keep up with the AI economy?" Their analysis highlights grid capacity constraints, extended power connection timelines and skilled labour shortages as critical blockers, and suggests technology innovation and strategic partnerships are key to scaling effectively.
Leaders are already voicing the stakes. George Tziahanas, Vice President of Compliance, Archive360, says: "AI needs power and lots of it. Government decisions around energy policy and data centre infrastructure will create significant regional advantages or disadvantages in AI capabilities. Countries that cannot provide sufficient power for data centres will fall behind in the global AI race, regardless of their regulatory frameworks."
What Finance Leaders Should Do Next
- Map workload classes to latency and density needs (trading, risk, AML, genAI) and match them to on-prem, colo or cloud footprints.
- Run TCO and time-to-capacity scenarios for GPU-dense builds, including liquid cooling and grid-interconnection lead times.
- Secure energy strategy early: PPAs, on-site generation options and demand-response programs.
- Design for resilience and sovereignty: multi-region failover, clear data residency, and auditable model monitoring.
- Negotiate scalable capacity blocks (hundreds of kW to multi-MW) with defined power and cooling upgrade paths.
- Incorporate sustainability metrics into procurement: PUE, WUE, emissions factors and heat-reuse opportunities.
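The sustainability metrics in the last item are straightforward ratios. A minimal sketch, using hypothetical annual totals for a single facility:

```python
# Minimal sketch of the sustainability metrics named in the checklist.
# All input figures are hypothetical annual totals, for illustration only.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_kwh

it_energy = 50_000_000        # kWh of IT load (hypothetical)
facility_energy = 65_000_000  # kWh total, incl. cooling and distribution
water = 90_000_000            # litres of cooling water (hypothetical)

print(f"PUE: {pue(facility_energy, it_energy):.2f}")     # 1.30
print(f"WUE: {wue(water, it_energy):.2f} L/kWh")         # 1.80
```

A PUE near 1.0 means almost all energy reaches IT load; liquid cooling typically lowers both ratios, which is one reason it features in procurement criteria.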
Why This Matters Now
As AI becomes foundational to financial services, infrastructure choices will influence speed to market, model quality and regulatory confidence. The firms that act early on power, cooling and HPC scale will set the pace for the rest of the sector.
Join the conversation: Register your interest to meet senior financial services and technology leaders at Breakfast at Tiffany's on 29 January 2026.
Want a quick overview of practical tools finance teams are using today? Explore this resource: AI tools for finance.