Exascale AI at ORNL Doubles Speed, Cuts Memory 75% for Hyperspectral Plant Research

ORNL's D-CHAG cuts memory use by up to 75% and delivers more than 2x faster hyperspectral plant imaging on Frontier, clearing bottlenecks for bigger models and faster crop insights.

Published on: Jan 30, 2026

AI method slashes memory use and doubles speed for plant imaging at ORNL

Researchers at Oak Ridge National Laboratory introduced a method that more than doubles processing speed while cutting memory use by up to 75% for hyperspectral plant imaging. The approach clears a major bottleneck in training foundation models on high-dimensional data from the Advanced Plant Phenotyping Laboratory (APPL), with training running at exascale on Frontier.

This work supports projects aligned with DOE's Genesis Mission and accelerates AI-guided discovery for resilient bioenergy and food crops. The team presented the method at SC25 in November 2025.

The bottleneck: hyperspectral data at scale

Standard cameras capture three channels. APPL's hyperspectral systems capture hundreds, each tied to a specific wavelength that reveals plant health, chemistry, and structure.

Processing all channels at once balloons memory and compute requirements. That has slowed training of advanced neural networks that could extract the most meaningful biological signals.
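To make the scaling issue concrete, here is a rough, illustrative calculation, not drawn from the ORNL work: assuming ViT-style patch tokenization with one token per spatial patch per channel and full self-attention over all tokens, the attention score matrix alone grows quadratically with the channel count. The helper functions, image size, channel count, and head count below are all hypothetical.

```python
# Back-of-the-envelope sketch (illustrative only): token count and attention
# memory for ViT-style patch tokenization of an image cube. Assumes one token
# per (spatial patch, channel group); sizes are hypothetical, not APPL's.

def token_count(height, width, channels, patch=16, channels_per_token=1):
    """Tokens produced when each spatial patch is tokenized per channel group."""
    spatial_patches = (height // patch) * (width // patch)
    channel_groups = channels // channels_per_token
    return spatial_patches * channel_groups

def attention_scores_gb(tokens, heads=16, bytes_per_elem=2):
    """Memory for one full self-attention score matrix (tokens x tokens per head)."""
    return tokens * tokens * heads * bytes_per_elem / 1e9

rgb_tokens = token_count(512, 512, channels=3, channels_per_token=3)
hsi_tokens = token_count(512, 512, channels=300, channels_per_token=1)

print(f"RGB tokens:            {rgb_tokens:,}")
print(f"Hyperspectral tokens:  {hsi_tokens:,}")
print(f"Attention scores, RGB: {attention_scores_gb(rgb_tokens):.2f} GB")
print(f"Attention scores, HSI: {attention_scores_gb(hsi_tokens):.2f} GB")
```

With these toy numbers, the hyperspectral cube produces roughly 300 times more tokens than the RGB image, and the attention score memory grows with the square of that factor, which is the blowup D-CHAG is designed to avoid.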

What D-CHAG changes

Distributed Cross-Channel Hierarchical Aggregation (D-CHAG) tackles the problem in two steps. First, distributed tokenization splits spectral channels across many GPUs, so each device handles a subset without overload.

Next, hierarchical aggregation merges information in stages across spectral regions instead of all at once. This staged merge reduces the data volume handled at each step, which lowers memory needs and speeds computation without sacrificing accuracy.
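To picture the two steps, here is a minimal single-process sketch in PyTorch. It is not the published D-CHAG code: the per-group encoders stand in for tokenization distributed across GPUs, the pairwise merge loop stands in for hierarchical aggregation across spectral regions, and all module names, dimensions, and the fusion layer are assumptions.

```python
# Minimal sketch of the two ideas above, NOT the published D-CHAG implementation.
import torch
import torch.nn as nn

class HierarchicalChannelAggregator(nn.Module):
    def __init__(self, n_groups: int, dim: int = 256):
        super().__init__()
        # One small encoder per spectral channel group; in a distributed run
        # each of these would live on its own GPU/rank.
        self.encoders = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_groups)])
        # A pairwise fusion layer reused at every level of the merge tree.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, groups):
        # Step 1: encode each channel group's tokens independently.
        feats = [enc(g) for enc, g in zip(self.encoders, groups)]
        # Step 2: merge neighbouring spectral regions in stages rather than
        # concatenating everything at once; each stage roughly halves the
        # number of feature sets, so per-step memory stays bounded.
        while len(feats) > 1:
            merged = []
            for i in range(0, len(feats) - 1, 2):
                merged.append(self.fuse(torch.cat([feats[i], feats[i + 1]], dim=-1)))
            if len(feats) % 2:              # carry an unpaired group forward
                merged.append(feats[-1])
            feats = merged
        return feats[0]

# Toy usage: 8 channel groups, each a batch of 4 sequences of 64 tokens.
model = HierarchicalChannelAggregator(n_groups=8)
x = [torch.randn(4, 64, 256) for _ in range(8)]
print(model(x).shape)  # torch.Size([4, 64, 256])
```

In a real distributed run, each encoder would sit on its own rank and each merge stage would be a collective between neighbouring ranks, so peak memory per device tracks the size of one merge rather than the full channel stack.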

The reported gains:
  • Up to 75% lower memory than standard foundation model methods, enabling larger models on existing hardware.
  • More than 2x faster processing for training and inference on large hyperspectral datasets.
  • Validated on APPL plant data and a weather dataset on Frontier.

Why this matters for plant science

AI can now learn directly from continuous imaging to quantify traits like photosynthetic activity, replacing slow manual measurements. Early detection of stress and disease improves, and linking genes to desirable traits gets faster and more reliable.

As hyperspectral systems scale from the lab to the field, with drone-mounted cameras flying over croplands, researchers and growers can monitor plant health in near real time. Breeders get rapid feedback loops for selecting plants with higher yield, better water use, or stronger stress tolerance.

Inside APPL's imaging pipeline

APPL runs plants through a series of imaging stations 24/7, capturing hyperspectral, structural, and chemical signatures. The lab's throughput and variety of modalities create foundation-model-ready datasets with full spatial and spectral fidelity.

D-CHAG keeps that fidelity intact while making training feasible at scale. The result: fine-grained signals in plant physiology are easier to learn and apply across experiments and environments.

Programs and projects this enables

  • Genesis Mission (DOE): Accelerates discovery science with AI and high-performance computing to strengthen national security and drive energy innovation.
  • OPAL: The Orchestrated Platform for Autonomous Laboratories unites AI, robotics, and automated experimentation across DOE labs to create self-improving discovery systems.
  • Generative Pretrained Transformer for Genomic Photosynthesis: Uses APPL-driven foundation model advances to simulate accurate genetic modifications for higher photosynthetic efficiency and productivity.

What's next

The team is refining models to predict photosynthetic efficiency directly from images. As compute becomes more available and hyperspectral sensors spread, expect workflows that connect lab phenotyping, field monitoring, and breeding decisions end to end.

Team and support

Contributors include Aristeidis Tsaris, Larry York, John Lagergren, Xiao Wang, Isaac Lyngaas, Prasanna Balaprakash, Dan Lu, and Feiyi Wang, with Mohamed Wahib of the RIKEN Center for Computational Science. Support came from the Center for Bioenergy Innovation (DOE BER) and ORNL's Laboratory Directed Research and Development program.

For practitioners

If you're building AI workflows or upskilling teams for foundation-model projects, see practical course paths by role at Complete AI Training.

