Stony Brook and UB secure $13.77M NSF grant for energy-efficient AI supercomputing, expanding access nationwide

$13.77M NSF grant backs Stony Brook and UB to broaden AI compute access nationwide. Energy-efficient hardware will support students, educators, and researchers across disciplines.

Published on: Sep 18, 2025

$13.77M federal grant boosts AI research and access at Stony Brook

The U.S. National Science Foundation awarded $13.77 million to Stony Brook University's Institute for Advanced Computational Science, in collaboration with the University at Buffalo. The project, "Sustainable Cyberinfrastructure for Expanding Participation," will broaden access to advanced computing and data resources for research across the country.

"This project employs a comprehensive, multilayered strategy, with regional and national elements to ensure the widest possible benefits," said Robert Harrison, Director of IACS. "The team will collaborate with multiple initiatives and projects, to reach a broad audience that spans all experience levels from high school students beginning to explore science and technology to faculty members advancing innovation through scholarship and teaching."

THE BLUEPRINT
  • Stony Brook and the University at Buffalo lead the initiative
  • Energy-efficient AI processors underpin the system
  • Expanded access for students, educators and researchers nationwide

What the award funds

The grant supports the purchase and operation of a high-performance, energy-efficient computing system designed for AI inference and data-intensive science. Priority fields include those historically underrepresented in high-performance computing, such as life sciences and computational linguistics.

By opening the system to researchers, students and educators, the project aims to lower barriers to large-scale compute, enable new discoveries and help train the next generation of scientists.

Hardware choices to scale access

The new system will use Arm-based AmpereOne M processors for low-cost, energy-efficient throughput across diverse academic workloads. Qualcomm Cloud AI inference accelerators will boost efficiency and support large AI models.

This marks the first academic deployment of both technologies following their success in commercial clouds. The effort supports national goals tied to the National Artificial Intelligence Research Resource (NAIRR) and NSF's broader cyberinfrastructure strategy.

Access and performance, without steep learning curves

The IACS-led supercomputer will run mixed workloads efficiently and deliver consistent, accessible performance without requiring advanced programming skills or deep hardware knowledge. The team will work with communities that have not traditionally used high-performance computing, extending participation beyond R1 universities and beyond historically HPC-heavy disciplines.

"The University at Buffalo is excited to partner with Stony Brook on this new project that will advance research, innovation and education by expanding the nation's cyber-infrastructure to scientific disciplines that were not high performance computing-heavy prior to the AI boom, as well as expanding to non-R1 universities," said Co-principal Investigator Nikolay Simakov.

What the vendors say

"AmpereOne M delivers the performance, memory and energy footprint required for modern research workloads-helping democratize access to AI and data-driven science by lowering the barriers to large-scale compute," said Jeff Wittich, chief product officer at Ampere. "We look forward to working with Stony Brook University to integrate this platform into research and education programs, accelerating discoveries in genomics, bioinformatics and AI."

"Qualcomm Technologies is proud to contribute our expertise in high-performance, energy-efficient AI inference and scalable Qualcomm Cloud AI Inference solutions to this initiative," said Richard Lethin, vice president of engineering at Qualcomm Technologies. "Our technologies enable seamless integration into a wide range of applications, enabling researchers and students to easily leverage advanced AI capabilities."

Why this matters to researchers

  • Broader compute access: National availability for labs, educators and students, including institutions with limited HPC resources.
  • Lower operating costs: Energy-efficient processors and accelerators help contain power and budget constraints.
  • Faster time to results: Inference-first design supports AI workloads common in life sciences, computational linguistics and beyond.
  • Simpler onboarding: Useful performance without specialized programming enables quicker adoption across disciplines.

What to watch next

Expect calls for participation, onboarding resources and training opportunities as the system comes online. Labs planning AI-heavy projects (genomics, bioinformatics, NLP pipelines or large-scale inference) can prepare data workflows now to take advantage of early access windows.