OpenAI's Call to Governments: Build Your Own AI Infrastructure
OpenAI's leader urged governments to invest directly in public AI infrastructure. The message: own the compute, capture the upside, and avoid propping up private companies that make bad bets.
Why it matters for public sector leaders: demand for compute is outpacing supply, constraining product rollouts and research. If your country or state wants to compete in defense, health, education, science, and industry, access to large-scale compute will decide how far you can go.
What was actually proposed
OpenAI clarified it does not want loan guarantees or bailouts for its own data centers. "We do not have or want government guarantees for OpenAI datacenters," the company's leader wrote.
The proposal centers on governments funding and owning national AI infrastructure. In that model, the public sector bears the capital risk and keeps the financial and strategic gains from the asset.
The numbers that set the context
- Over $20 billion in annualized revenue expected this year, according to the company.
- About $1.4 trillion in infrastructure spending commitments under consideration over the next eight years.
- A $300 billion partnership with Oracle and a $500 billion "Stargate" project with Oracle and SoftBank announced at the White House in January.
- Projected revenue growth to hundreds of billions by 2030, tied to consumer devices, robotics, and AI-driven scientific discovery.
Behind the push: severe compute constraints are already forcing delayed features and throttled access across the sector. The stated view is that the risk of too little compute is higher than the risk of building ahead of demand.
Why this matters for government teams
AI now sits alongside energy, water, transit, and broadband as foundational infrastructure. Without adequate compute, national AI strategy will stall, and public missions such as health analytics, cyber defense, climate modeling, education, and justice will lag.
Owning critical compute can hedge against vendor lock-in, price spikes, and export controls. It also lets governments decide how capacity is prioritized between public interest workloads and commercial tenants.
What a public "compute reserve" could look like
- Publicly owned capacity with a charter: national security, science, education, and SME access.
- Tiered allocation: guaranteed capacity for public workloads; competitively priced capacity for industry; discounted access for research and startups.
- Open, neutral access: standard APIs, multi-tenant design, and transparent scheduling to avoid favoritism.
- Energy-aware siting: colocate with abundant clean generation, recycled heat, and water stewardship.
- Domestic supply chain focus: diversify GPUs/accelerators, encourage packaging and fabrication onshore or with trusted allies.
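The tiered-allocation idea above can be sketched in a few lines. The tier names, reserved shares, and prices below are hypothetical placeholders for illustration, not a proposed national policy:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    reserved_share: float      # fraction of total capacity guaranteed to this tier
    price_per_gpu_hour: float  # illustrative pricing, not a real rate card

# Hypothetical charter: guaranteed public capacity, discounted research
# access, and market-priced commercial capacity.
TIERS = [
    Tier("public_missions", 0.40, 0.00),
    Tier("research_startups", 0.20, 1.50),
    Tier("commercial", 0.40, 4.00),
]

def allocate(total_gpu_hours: float) -> dict[str, float]:
    """Split total capacity across tiers by their reserved shares."""
    return {t.name: total_gpu_hours * t.reserved_share for t in TIERS}

print(allocate(1_000_000))  # guaranteed GPU-hours per tier
```

A real scheduler would add preemption rules and demand-spike priorities, but even this simple split makes the charter auditable: each tier's guaranteed share is explicit and sums to the facility's total capacity.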
Policy options to consider now
- Capacity contracts: Pre-commit to multi-year take-or-pay agreements for compute. This derisks private build-outs without guarantees or bailouts.
- Public-private build-own-operate: Government funds and owns the core asset; private partners design, build, and run it under strict SLAs.
- Regional hubs: Spread facilities to reduce grid stress, build local talent, and improve resiliency.
- Interoperability mandates: Require standard runtimes, data formats, and portability across providers.
- Open research access: Reserve a slice for universities and national labs with clear safety and security protocols.
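The take-or-pay mechanism in the first bullet comes down to simple arithmetic: the buyer pays for the full committed volume whether or not it is used, which gives builders predictable revenue without any guarantee or bailout. The figures below are hypothetical:

```python
def take_or_pay_cost(committed_gpu_hours: float, used_gpu_hours: float,
                     contract_price: float, spot_price: float) -> float:
    """Annual cost under a take-or-pay contract: pay for the full
    commitment, plus spot-priced overage if usage exceeds it."""
    overage = max(0.0, used_gpu_hours - committed_gpu_hours)
    return committed_gpu_hours * contract_price + overage * spot_price

# Hypothetical year: commit to 10M GPU-hours at $2.00/hr; spot is $3.50/hr.
print(take_or_pay_cost(10_000_000, 8_000_000, 2.00, 3.50))   # under-use: commitment still paid
print(take_or_pay_cost(10_000_000, 12_000_000, 2.00, 3.50))  # over-use: commitment plus overage
```

The risk transfer is visible in the first case: the public buyer absorbs the cost of unused capacity, which is precisely what de-risks the private build-out.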
Financing without picking winners
- Green bonds and infrastructure funds: Treat compute like energy or transit assets, with transparent returns and public oversight.
- Accelerator-neutral tax credits: Incentivize capacity regardless of vendor, contingent on performance and openness.
- Capacity auctions: Award offtake contracts to providers meeting price, reliability, and security thresholds, with no single-company favors.
- Sovereign or state funds: Equity stakes in shared facilities, not in private corporate entities.
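A capacity auction that weighs price against reliability and security can be scored transparently. This is a minimal sketch; the weights, bids, and normalization below are invented for illustration, not a recommended auction design:

```python
# Hypothetical weights for a multi-criteria capacity auction.
WEIGHTS = {"price": 0.5, "reliability": 0.3, "security": 0.2}

def score(bid: dict, max_price: float) -> float:
    """Higher is better. Price is normalized so cheaper bids score
    higher; reliability and security are assessed on a 0-1 scale."""
    price_score = 1.0 - bid["price_per_gpu_hour"] / max_price
    return (WEIGHTS["price"] * price_score
            + WEIGHTS["reliability"] * bid["reliability"]
            + WEIGHTS["security"] * bid["security"])

bids = [
    {"name": "A", "price_per_gpu_hour": 2.0, "reliability": 0.95, "security": 0.90},
    {"name": "B", "price_per_gpu_hour": 1.5, "reliability": 0.80, "security": 0.85},
]
max_price = max(b["price_per_gpu_hour"] for b in bids)
winner = max(bids, key=lambda b: score(b, max_price))
print(winner["name"])
```

Publishing the scoring formula and weights before bidding is what keeps the auction vendor-neutral: any provider meeting the thresholds can compute its own score.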
Governance, safety, and security
- Safety gates: Independent red-teaming for models trained on public capacity; require safety reports before deployment.
- Data protections: Strong controls for sensitive government and citizen data; clear separation from commercial tenants.
- Auditable operations: Real-time telemetry, carbon reporting, and uptime metrics available to oversight bodies.
- National and allied coordination: Align on export controls, incident response, and standards to avoid fragmentation.
What this means for procurement officers and CIOs
Set requirements that prevent lock-in: model portability, standardized runtimes, and transparent pricing. Ask for concrete delivery schedules, supply assurances, and replacement pathways if hardware is delayed.
Tie contracts to energy efficiency and emissions intensity. Require independent performance benchmarks that reflect your workloads, not vendor-chosen tests.
Key questions to put on the table
- What share of national compute should be publicly owned vs. contracted from the market?
- How do we prioritize public missions during demand spikes?
- What is the plan if export restrictions or supply shocks limit access to top-tier accelerators?
- How will we train and retain the operators who can keep this infrastructure reliable and secure?
- What is the five-year path to lower cost-per-token and lower emissions per inference?
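The last question above can be tracked with two unit metrics that procurement teams can write into contracts. The formulas are standard unit conversions; the input figures are hypothetical placeholders:

```python
def cost_per_million_tokens(gpu_hour_cost: float,
                            tokens_per_gpu_hour: float) -> float:
    """Dollars per million tokens served."""
    return gpu_hour_cost / tokens_per_gpu_hour * 1_000_000

def grams_co2_per_inference(watts_per_gpu: float,
                            seconds_per_inference: float,
                            grid_gco2_per_kwh: float) -> float:
    """Emissions per inference, from power draw and grid carbon intensity."""
    kwh = watts_per_gpu * seconds_per_inference / 3_600_000  # watt-seconds -> kWh
    return kwh * grid_gco2_per_kwh

# Hypothetical inputs: $2.50/GPU-hour serving 5M tokens/hour;
# 700W accelerator, 2s per inference, 400 gCO2/kWh grid.
print(cost_per_million_tokens(2.50, 5_000_000))
print(grams_co2_per_inference(700, 2.0, 400))
```

Benchmarked yearly on the government's own workloads, these two numbers give the five-year trendline the question asks for.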
Context on recent criticism and the clarified stance
Following public pushback on comments from the company's CFO about potential U.S. loan guarantees, the CFO retracted the idea, describing the remarks as a clumsy explanation. The company's leader reinforced the point: governments should not pick winners or bail out firms that miss the mark.
That clarity matters for policymakers. It keeps the door open for state-owned compute while avoiding backstops for any single vendor.
Practical next steps for government teams
- Commission a 10-year national compute plan covering capacity, locations, grid integration, and workforce.
- Launch a pilot "compute reserve" with transparent eligibility and evaluation criteria.
- Adopt a unified framework for model risk management across agencies and vendors.
- Create a procurement playbook for capacity contracts, including standard SLAs and audit rights.
Further reading and resources
- U.S. National AI Research Resource (NAIRR) Pilot - a working example of public access to compute for research.
- Complete AI Training: Courses by Job - upskilling paths for public sector roles building and buying AI systems.
Bottom line
Compute is now strategic infrastructure. If governments build and own a share of it, with clear rules, fair access, and strong safeguards, they can protect the public interest while accelerating science, services, and industry.
The worst outcome isn't overbuilding. It's leaving essential public missions waiting in line for capacity you don't control.