LLNL Shines at SC25: El Capitan Stays No. 1, Gordon Bell Prize, and Exascale AI

LLNL stole the show at SC25: El Capitan kept the top spot, and a Gordon Bell win delivered real-time tsunami forecasts, alongside big strides in protein folding, AI trust, and DOE partnerships.

Categorized in: AI News, Science and Research
Published on: Dec 05, 2025

LLNL shines at SC25: exascale leadership, real-time science, and AI that accelerates discovery

SC25 delivered volume and substance: 16,000+ attendees, nearly 560 exhibitors, and one of the strongest technical lineups to date. Lawrence Livermore National Laboratory (LLNL) was everywhere: on stage, on the Top500, and in the sessions pushing HPC and AI forward.

Under conference general chair Lori Diachin, LLNL's imprint was clear: ambitious systems, rigorous software, and results that matter to practice. Here's what researchers should take away.

El Capitan holds No. 1 worldwide, again

LLNL's El Capitan retained the top spot on the Top500, posting 1.809 exaFLOPs on the HPL benchmark. It also stayed No. 1 on HPCG and HPL-MxP, marking a rare trifecta for a second straight list.

The system continues to scale across traditional simulations, memory-heavy workloads, and AI-driven computation with standout energy efficiency. Built through LLNL-HPE-AMD collaboration and funded by NNSA's Advanced Simulation and Computing program, El Capitan is backed by a mature software stack for management, numerics, scheduling, performance engineering, and large-scale AI workflows.

Gordon Bell Prize: real-time tsunami forecasting at exascale

LLNL, the Oden Institute (UT Austin), and Scripps Institution of Oceanography won the ACM Gordon Bell Prize for physics-based, real-time tsunami forecasting on El Capitan. Using LLNL's MFEM finite element library, the team turned deep-ocean pressure data into localized predictions in under 0.2 seconds, roughly 10 billion times faster than conventional methods, supporting rapid alerts and fewer false alarms.

"We're really excited to win the Gordon Bell Prize," said LLNL's Tzanio Kolev. "We can't wait to apply El Capitan and finite-element algorithms to more applications." Their tsunami visualizations also appeared in SC25's Art of HPC exhibit.

More science wins: rocket exhaust, awards, and LLM interpretability

El Capitan powered a second Gordon Bell finalist: a record-scale, fully coupled fluid-chemistry simulation of rocket exhaust plumes at unprecedented resolution. LLNL also received HPCwire Readers' and Editors' Choice honors and a 2025 Hyperion Research HPC Innovation Excellence Award for system leadership and real-world impact.

On the AI side, an LLNL team led by Harshitha Menon was a Best Poster finalist for work on the interpretability of large language models for HPC code. The focus: whether models truly grasp parallelism, concurrency, and correctness, and how to raise trust and verifiability for high-stakes scientific software.

AI-accelerated science on frontier systems: ElMerFold and OpenFold3

LLNL's Nikoli Dryden detailed ElMerFold: an exascale-driven protein-folding workflow that generated over 2,400 structures per second on El Capitan. The team shrank an eight-day computation to about 11 hours by pairing optimized training and inference with AMD APU unified CPU-GPU memory, node-local storage via HPE Rabbits, and a scalable ML pipeline.

This infrastructure also produced large-scale distillation data for the newly released OpenFold3 model. As Dryden put it, the goal is straightforward: build AI-for-science models that fully exploit frontier hardware for molecular modeling and predictive biology.

DOE signals unified direction for HPC, AI, and quantum

DOE Undersecretary for Science Dario Gil called for a nationally coordinated effort that brings DOE, industry, and academia into a shared agenda. He emphasized DOE's unique strengths, including decades of scientific data, facilities, and mission computing, and the central role of national labs as AI scales.

The message to researchers: partnerships, co-investment, and common goals will set the pace for progress across compute, data, and models.

Leadership, outreach, and community building

Despite a 43-day federal government shutdown, LLNL maintained a strong SC25 presence: tutorials on large-scale workflows, AI-accelerated science, exascale performance, and system software; plus sessions on agentic AI, fusion science, GPU optimization, numerical methods, scientific ML, and open source. On the floor, LLNL demonstrated agentic AI for fusion research and Flux at the DOE booth.

Through Students@SC, LLNL staff mentored early-career researchers with practical guidance on communication, problem-solving, and translating coursework into impact. As Computing Workforce Manager Marisol Gamboa told attendees, "Don't measure yourself by a failure, measure yourself by your recovery rate."

As SC25 general chair, Lori Diachin led the "HPC Ignites" theme, expanded outreach, grew the Art of HPC exhibit, and steered the event through disruptions. "I am proud of my team of volunteers who just knocked it out of the park," she said, underscoring that SC is as much about building durable collaborations as it is about systems and papers.

Why this matters for research teams

  • Exascale isn't theoretical: El Capitan is delivering performance across simulations, memory-heavy workloads, and AI with strong efficiency.
  • Physics-informed AI is ready for time-critical problems. The tsunami work shows real-time inference is feasible without sacrificing rigor.
  • Software maturity is now a competitive edge. Libraries, scheduling, and data pipelines determine whether teams hit scale.
  • Trust in AI for HPC code matters. Interpretability and verification will decide whether models move from demos to production.

What to watch next

  • Wider deployment of real-time scientific digital twins (earth systems, plasma, materials).
  • Frontier-scale training workflows that fuse simulation data with experimental streams.
  • Standard practices for LLM interpretability in parallel scientific software.
  • DOE-led collaborations that align compute, data access, and workforce development.

SC25 made one point clear: the path forward blends exascale systems, principled AI, and the people who can make them work together. LLNL showed what that looks like in practice.
