Oracle's $35bn AI and Cloud Buildout Goes Global

Oracle is spending big on data centres to meet AI demand and make multicloud work for enterprises. OCI's bare-metal GPUs and global builds aim to lock in capacity and growth.

Published on: Oct 29, 2025

Oracle's Data Centre Strategy: Cloud, AI and More Capacity

Oracle is pouring billions into data centres to meet AI demand and support a multicloud future. That's a sharp turn for a company that arrived late to cloud; Larry Ellison once dismissed it as "gibberish." Today, the bet is clear: build capacity fast, win enterprise AI workloads, and make multicloud practical for customers.

What changed, and why it matters

Oracle Cloud Infrastructure (OCI) was rebuilt around enterprise requirements: high performance, strict isolation, and predictable cost control. Bare-metal instances give customers direct hardware access without a hypervisor, which is useful for databases, high-performance computing, and regulated workloads. Security was a core design principle of the Gen 2 architecture, not an add-on.

As Safra Catz put it: "We know better than anyone what it takes to run the full stack of technology that goes into mission-critical workloads."

Built for enterprise and AI workloads

OCI now targets AI training and inference at scale. The platform offers bare-metal GPU instances built on Nvidia's Blackwell and AMD's MI300X accelerators, with RDMA over Converged Ethernet to keep GPU-to-GPU communication latency low during large-scale training.
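To see why low-latency GPU-to-GPU networking matters at this scale, consider a back-of-the-envelope model of a ring all-reduce, the collective operation that synchronizes gradients during distributed training. The figures below are illustrative placeholders, not Oracle or Nvidia specifications:

```python
def ring_allreduce_seconds(grad_bytes: float, n_gpus: int,
                           bw_bytes_per_s: float, latency_s: float) -> float:
    """Estimate one ring all-reduce: 2*(N-1) communication steps,
    each moving grad_bytes/N and paying one link latency."""
    steps = 2 * (n_gpus - 1)
    per_step_bytes = grad_bytes / n_gpus
    return steps * (latency_s + per_step_bytes / bw_bytes_per_s)

# Illustrative: 10 GB of gradients synchronized across 64 GPUs.
GB = 1e9
fast = ring_allreduce_seconds(10 * GB, 64, 50 * GB, 5e-6)   # ~400 Gb/s RDMA-class fabric
slow = ring_allreduce_seconds(10 * GB, 64, 5 * GB, 50e-6)   # ~40 Gb/s conventional path
print(f"fast fabric: {fast:.2f} s/step  vs  slow fabric: {slow:.2f} s/step")
```

Because this synchronization happens on every training step, a roughly 10x gap in fabric bandwidth compounds into a 10x gap in time spent communicating, which is why RDMA-class networking is a headline feature for AI clusters.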

A multi-year deal with OpenAI validated this direction, and more AI labs are adopting multicloud to secure enough GPU capacity. For context on the hardware stack, see Nvidia Blackwell and AMD MI300X.

A construction program with global reach

Oracle's capital expenditures were about US$6.9bn in fiscal 2024 and climbed to roughly US$21.2bn in fiscal 2025. The company projects nearly US$35bn in fiscal 2026, primarily for data centre equipment. Larry Ellison said, "We're bringing on enormous amounts of capacity over the next 24 months," noting one new US AI facility is sized to fit eight Boeing 747s nose-to-tail.

Spending is spread worldwide: over US$8bn in Japan, more than US$6.5bn in Malaysia, and US$3bn across Germany and the Netherlands. The builds support data sovereignty laws by placing full cloud stacks in-country, or even inside a customer's own data centre, through Oracle's distributed cloud portfolio.

What the numbers signal

Momentum is visible in the financials. In Q4 FY2025, IaaS revenue reached US$3bn, up 52%. Safra Catz projected Cloud Infrastructure growth accelerating from 50% in FY2025 to over 70% in FY2026. Remaining performance obligations (RPO) jumped from US$80bn in Q3 FY2024 to US$455bn by Q1 FY2026, up 359% year over year.

Her view on demand: "We expect to continue receiving large contracts reserving cloud infrastructure capacity because the demand for our Gen2 AI infrastructure substantially exceeds supply." Multicloud is also a revenue lever, with offerings such as Oracle Database@Azure expanding the addressable base. Oracle's number 16 position on the Top 100 list marks progress, even as overall market share remains in the low single digits.

Executive takeaways

  • Book capacity early. AI-grade GPUs are supply-constrained. Consider multi-year, multi-region reservations.
  • Use multicloud to balance risk and performance. Split training, inference, and data services across providers where it makes sense.
  • Prioritize data residency. If sovereignty rules apply, validate in-country or on-prem deployment options and audit pathways.
  • Match workload to substrate. For high-performance databases and AI training, evaluate bare-metal vs. virtualized trade-offs.
  • Scrutinize the network. RDMA and cluster design can make or break training time-and cost.
  • Model total cost, not list price. Include egress, interconnect, GPU utilization, and reserved capacity economics.
  • Plan for scale events. Ensure power, cooling, and supply commitments align with your AI roadmap.
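The total-cost point above can be made concrete with a simple effective-cost calculation. Reserved capacity bills whether or not it is used, so the honest unit is cost per *useful* GPU-hour. All rates here are hypothetical placeholders, not Oracle pricing:

```python
def effective_gpu_hour_cost(list_rate: float, utilization: float,
                            egress_gb_per_hour: float, egress_rate_per_gb: float,
                            reserved_discount: float = 0.0) -> float:
    """Effective cost per useful GPU-hour: billed rate divided by
    utilization (idle reserved hours still cost money), plus egress."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    billed = list_rate * (1 - reserved_discount)
    return billed / utilization + egress_gb_per_hour * egress_rate_per_gb

# Hypothetical inputs: $10/hr list rate, 30% reserved-capacity discount,
# 65% realized utilization, 2 GB/hr of egress at $0.08/GB.
cost = effective_gpu_hour_cost(10.0, 0.65, 2.0, 0.08, reserved_discount=0.30)
print(f"effective cost per useful GPU-hour: ${cost:.2f}")
```

Even with a 30% discount, a hypothetical cluster running at 65% utilization costs more per useful hour than its list rate, which is why utilization assumptions belong in every capacity negotiation.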

Questions to pressure-test your plan

  • What guaranteed GPU capacity can we secure in the next 6-24 months, and at what utilization thresholds?
  • How will we meet sovereignty requirements without creating stranded capacity or operational silos?
  • Where do we need bare-metal today-and where could we shift later to optimize cost?
  • What's our fallback if one provider faces supply issues or regional constraints?
  • How do network topology and storage throughput scale as model sizes and datasets grow?

If you're upskilling leaders and teams on AI infrastructure and multicloud strategy, explore curated executive-focused learning paths here: AI courses by job.
