Venado Supercomputer Deploys OpenAI o3 on Classified Network for National Security Science
Los Alamos moved Venado to a classified network, running OpenAI o3 on NVIDIA GH200 to speed mission-critical science. Focus: plutonium aging, biosecurity, grid resiliency.

Los Alamos National Laboratory has moved its Venado supercomputer onto a classified network and is now running OpenAI's latest o3 reasoning model on NVIDIA GH200 Grace Hopper Superchips. Venado ranks 19th worldwide and serves researchers across National Nuclear Security Administration (NNSA) laboratories. The goal: shorten cycles from hypothesis to result for mission-critical science.
Why it matters for researchers
AI is now embedded in national security research, from analyzing diagnostic data and optimizing experiments to improving facility operations. Acting NNSA Administrator Teresa Robbins emphasized that the Labs are pairing top-tier compute with advanced AI to support core missions that require speed, rigor, and traceability.
What Venado is running now
Venado is among the first government systems to run OpenAI's o3 reasoning model for national security applications. Reasoning models help teams plan, critique, and iterate on complex scientific workflows. For background on the model class, see OpenAI's overview of o3 reasoning approaches: Introducing o3.
Early results and current mission focus
In its first year, Venado supported advances in materials science and design, DNA and disease research, and energy grid resiliency. Now on a secure network, priority work includes studies of plutonium aging, guardrails against biological threats, and other high-impact national security research areas.
A deeper government-industry collaboration
Los Alamos established early partnerships with NVIDIA and OpenAI, helping define how AI models integrate into high-consequence science. Laboratory Director Thom Mason noted this is the first use of OpenAI's reasoning models for national security on a government resource like Venado, setting a template for broader collaboration across Los Alamos, Lawrence Livermore, and Sandia.
OpenAI's Kevin Weil underscored the pace benefit: what once took years could compress into months when researchers pair capable models with large-scale compute. These efforts build on ongoing work across the national labs to evaluate model reasoning and improve AI safety practices.
Under the hood: Venado's architecture
- 2,560 direct liquid-cooled NVIDIA GH200 Grace Hopper Superchips in an exascale-class HPE Cray EX system
- 920 NVIDIA Grace CPU Superchips (each with 144 Arm cores)
- Arm-based Grace CPUs coupled with Hopper architecture accelerators for HPC and large-scale AI
- Higher throughput with better energy efficiency than prior-generation systems
For technical context on GH200, see NVIDIA's platform page: NVIDIA Grace Hopper Superchip.
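The node counts above imply some back-of-envelope totals. The sketch below uses only the figures listed in this article and, for simplicity, ignores the Arm cores that sit inside each GH200 superchip itself:

```python
# Back-of-envelope totals for Venado, from the figures above (a sketch,
# not an official spec; GH200-side Grace cores are deliberately ignored).
gh200_superchips = 2560          # each pairs one Grace CPU with one Hopper GPU
grace_cpu_superchips = 920       # CPU-only partition
cores_per_grace_superchip = 144  # Arm cores per Grace CPU Superchip

hopper_gpus = gh200_superchips
grace_cpu_cores = grace_cpu_superchips * cores_per_grace_superchip

print(f"Hopper GPUs: {hopper_gpus}")                  # 2560
print(f"CPU-partition Arm cores: {grace_cpu_cores}")  # 132480
```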
What this enables for lab teams
- Faster diagnostic analysis: automate triage, flag anomalies, and prioritize follow-ups
- Experiment optimization: generate designs, critique methods, and quantify expected gains before running
- Operational efficiency: procedural planning, shift handoffs, and checklists informed by model feedback
- Knowledge consolidation: synthesize literature, past runs, and domain notes into actionable briefs
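To make the "flag anomalies and prioritize follow-ups" pattern concrete, here is a minimal, hypothetical triage heuristic using z-scores. It is an illustration only, not the Lab's actual pipeline; in practice a reasoning model would sit downstream, reviewing and explaining the flagged items:

```python
import statistics

def triage(readings, threshold=3.0):
    """Flag readings more than `threshold` population standard deviations
    from the mean, ordered by severity (a hypothetical triage heuristic)."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # no spread, nothing to flag
    flagged = [
        (i, x, abs(x - mean) / stdev)   # (index, value, z-score)
        for i, x in enumerate(readings)
        if abs(x - mean) / stdev > threshold
    ]
    # Most extreme deviations first, so follow-ups are prioritized.
    return sorted(flagged, key=lambda t: t[2], reverse=True)
```

A sharp spike in otherwise flat data, e.g. `triage([0.0] * 99 + [100.0])`, is returned as a single high-z-score candidate for follow-up.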
NVIDIA's Ian Buck described Venado as a "frontier AI factory," built to simulate hard-to-observe phenomena and reason across complex systems. That combination is aimed at turning scientific intent into validated results with less iteration cost.
What's next
Los Alamos plans a next-generation AI-focused deployment in 2026 to expand capacity for basic scientific research. Meanwhile, Venado's classified deployment with OpenAI's reasoning models is expected to guide repeatable workflows for sensitive research across NNSA laboratories.
For researchers upskilling in AI
If you're building AI capability for your team, explore practitioner-led course maps from leading AI companies: AI courses by leading companies.