OpenAI and Oracle Plan Five New U.S. Data Centers as Stargate Reaches $400B and 7 GW Capacity

OpenAI and Oracle's Stargate adds five U.S. data center sites, lifting plans to ~7 GW and $400B, aiming for 10 GW/$500B by 2028. New sites in TX, NM, OH, plus a Midwest TBD.

Categorized in: AI News, IT and Development
Published on: Sep 25, 2025

OpenAI-Oracle "Stargate" adds five U.S. data center sites, closing in on 10 GW target

OpenAI and Oracle plan five new U.S. data center sites under the "Stargate" program, in partnership with SoftBank. The build-out totals roughly 7 GW of planned capacity and $400 billion in investment over the next three years, moving toward a 10 GW and $500 billion target by 2028.

Backed by President Donald Trump, Stargate signals a larger shift in U.S. compute supply and regional grid demand. Sam Altman said the required compute build-out is essential for AI to deliver broad benefits and future breakthroughs.

The essentials

  • Planned investment: $400B (next three years); target: $500B by 2028.
  • Capacity: ~7 GW planned; program goal: 10 GW.
  • Sites selected from 300+ proposals; 25,000+ jobs expected.
  • New U.S. additions are among the first since the January White House reveal.
  • AI data centers could use about 12% of U.S. electricity by 2028 (Lawrence Berkeley National Laboratory).

Where the sites are

  • Shackelford County, Texas (about 157 miles west of Dallas).
  • Doña Ana County, New Mexico (roughly 230 miles south of Albuquerque).
  • Midwest location (TBD).
  • Lordstown, Ohio (ground broken; expected online next year).
  • Milam County, Texas (supported by SB Energy, a SoftBank subsidiary).

These join an operational Stargate site in Abilene, Texas. Oracle and OpenAI's July agreement covers more than $300B in data center spend over five years. Combined, the new complexes are expected to add more than 5.5 GW of data center electrical capacity, more than twice San Francisco's citywide electricity use.

Stargate has not fully detailed energy sources for the new sites; Abilene currently runs on natural gas. More locations could be announced soon.

Policy and grid context

The Trump administration advanced an AI action plan in July and a Department of Energy proposal for new data center sites at national labs. Both signal federal interest in accelerating compute deployment and grid integration.

For energy teams, the mix of gas, nuclear, and renewables will define emissions, pricing, and uptime strategies. Expect stronger coordination with regional transmission operators, new peaker and storage assets, and demand-response programs.

Reference: U.S. DOE AI initiatives (energy.gov).

How this affects engineering and IT teams

  • Capacity pipeline: 7-10 GW implies more GPU/TPU availability across multiple regions. Plan for staged quota increases, not overnight relief.
  • Latency and placement: Texas, New Mexico, Ohio, and a future Midwest site shift optimal regions for inference APIs, data gravity, and edge aggregation.
  • Multi-cloud design: With Oracle in the mix, revisit cross-cloud peering, identity, observability, and cost controls for hybrid workloads.
  • Reliability: Treat each site's grid profile differently. Design for regional failover, energy-aware scheduling, and flexible SLOs.
  • Sustainability reporting: Track carbon intensity by region and hour. Tie training windows to lower-emissions intervals where feasible.
  • Networking: Expect heavy east-west traffic. Prioritize private backbone, NUMA-aware cluster placement, and congestion control tuning.
  • Procurement: Secure GPU allocations, storage tiers, and high-throughput interconnects early; align contracts with expected site timelines.
  • Data governance: Check state-level data residency, export controls, and incident response rules before migration.
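The energy-aware scheduling point above can be sketched concretely: given an hourly carbon-intensity forecast for a region, pick the contiguous window with the lowest average intensity for deferrable training jobs. This is a minimal sketch; the `HourSlot` type, the forecast values, and the window length are all hypothetical placeholders, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class HourSlot:
    hour: int                 # hour of day, 0-23
    grams_co2_per_kwh: float  # forecast grid carbon intensity

def pick_training_window(forecast: list[HourSlot], hours_needed: int) -> int:
    """Return the start hour of the contiguous window with the lowest
    average carbon intensity, assuming the forecast covers one day."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - hours_needed + 1):
        window = forecast[start:start + hours_needed]
        avg = sum(s.grams_co2_per_kwh for s in window) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return forecast[best_start].hour

# Hypothetical forecast: intensity falls through the morning, rises after noon.
forecast = [HourSlot(h, 500 - 15 * h if h < 12 else 320 + 20 * (h - 12))
            for h in range(24)]
print(pick_training_window(forecast, 4))  # → 10 (lowest-carbon 4-hour window)
```

The same selection logic extends naturally to inference batch jobs or checkpoint-resumable training, where the scheduler pauses outside the chosen window.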

What else is moving

  • Microsoft is building a $4B data center in Wisconsin and partnering with Constellation Energy to restart the Three Mile Island reactor in Pennsylvania.
  • Meta is developing an AI complex in Louisiana with an electric load comparable to Manhattan's.
  • Amazon plans $20B for AI sites in Pennsylvania.
  • Nvidia announced a $100B investment in OpenAI and will supply chips to support the expansion.

Action checklist for technical leaders

  • Map critical services to the closest announced regions to cut latency and egress.
  • Adopt workload portability (containers, IaC, policy-as-code) to shift across Oracle, existing clouds, and on-prem as capacity comes online.
  • Implement energy-aware training and inference windows aligned with grid conditions and SLA tolerance.
  • Refactor models for hardware diversity (H100, B200, Grace Hopper, MI300, CPU-only fallbacks) to avoid allocation stalls.
  • Pre-negotiate network and storage throughput ceilings; validate with synthetic load before go-live.
  • Quantify total cost of AI (compute + network + storage + carbon) per product feature to guide capacity requests.
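The last checklist item, quantifying total cost of AI per feature, can be sketched as a simple roll-up across the four cost drivers named above. All rates and usage figures here are hypothetical placeholders for illustration, not real cloud pricing.

```python
def total_cost_per_feature(
    gpu_hours: float, gpu_rate: float,               # compute ($/GPU-hour)
    egress_gb: float, egress_rate: float,            # network ($/GB)
    storage_gb_month: float, storage_rate: float,    # storage ($/GB-month)
    kwh: float, grams_co2_per_kwh: float,            # energy and grid intensity
    carbon_price_per_ton: float,                     # internal carbon price
) -> dict:
    """Roll compute, network, storage, and carbon into one per-feature figure."""
    compute = gpu_hours * gpu_rate
    network = egress_gb * egress_rate
    storage = storage_gb_month * storage_rate
    carbon_tons = kwh * grams_co2_per_kwh / 1_000_000  # grams -> metric tons
    carbon = carbon_tons * carbon_price_per_ton
    return {
        "compute": compute, "network": network, "storage": storage,
        "carbon": round(carbon, 2),
        "total": round(compute + network + storage + carbon, 2),
    }

# Hypothetical monthly usage for one product feature.
costs = total_cost_per_feature(
    gpu_hours=100, gpu_rate=2.50,
    egress_gb=500, egress_rate=0.08,
    storage_gb_month=1000, storage_rate=0.02,
    kwh=700, grams_co2_per_kwh=400, carbon_price_per_ton=50,
)
print(costs["total"])  # → 324.0
```

Tracking this figure per feature, rather than per cluster, makes it possible to tie capacity requests to product value as new Stargate regions come online.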

Stay current

Skill up on vendor ecosystems, role-based paths, and tooling updates as this build-out proceeds: AI training by job role.