Husker Scientist Advancing Speed, Intelligence and Efficiency in Data Networks
February 15, 2026 | Media Release
A University of Nebraska-Lincoln computing professor is leading three projects that push data networks to be faster, smarter and more efficient. The work sits at the intersection of AI and next-generation connectivity, backed by the National Science Foundation and the U.S. Department of Energy.
The common thread: making sense of the massive data streams flowing through research backbones and scientific facilities. Two projects apply AI and machine learning to high-volume network logs; a third explores future-ready optical systems.
Three projects, one challenge: data at scale
- AI for routing intelligence (NSF): Applying machine learning to routing logs from Internet2 to classify traffic, detect anomalies and map how data traverses large backbones. The goal is actionable insight without manual sifting. (A minimal anomaly-detection sketch follows this list.)
- Predictive caching for science (DOE): Analyzing caching logs from the Open Science Grid to forecast which files will be requested next, then prefetching them to minimize waits. This directly supports high-energy physics workflows fed by the Large Hadron Collider.
- Optical network advances (NSF): Exploring technologies that make fiber systems faster, more energy-efficient and more cost-effective. As global data needs climb, energy per bit and operational simplicity become core performance metrics.
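To make the routing-log idea concrete, here is a minimal anomaly-detection sketch in Python, assuming scikit-learn is available. The per-interval features (bytes, packets, distinct flows) and all numbers are illustrative assumptions, not the project's actual pipeline.

```python
# A minimal sketch of flagging anomalous traffic intervals with an
# isolation forest. Assumes numpy and scikit-learn; the feature layout
# is a made-up stand-in for real Internet2 routing/flow logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-minute features: [bytes, packets, distinct flows].
rng = np.random.default_rng(0)
normal = rng.normal(loc=[1e9, 1e6, 5e3], scale=[1e8, 1e5, 5e2], size=(500, 3))
spikes = rng.normal(loc=[5e9, 4e6, 2e4], scale=[1e8, 1e5, 5e2], size=(5, 3))
X = np.vstack([normal, spikes])

# Isolation forests isolate points with random splits; rare, extreme
# intervals take few splits to isolate and are labeled -1.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)

for i in np.flatnonzero(labels == -1):
    print(f"interval {i}: bytes={X[i, 0]:.3g} flagged as anomalous")
```

The same shape of pipeline extends to traffic classification by swapping the unsupervised model for a supervised one trained on labeled flows.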
Why this matters for research teams
High-speed networks are foundational for AI workloads. Distributed training, cross-site inference and data-intensive experiments all depend on predictable, high-bandwidth paths and smart data placement.
Large network infrastructures at universities and labs generate logs on a scale that demands machine learning. Pattern discovery, anomaly detection and forecasting are now table stakes for stable, efficient operations.
"AI is a good tool," said Byrav Ramamurthy, professor of computing at Nebraska and principal investigator. "Sometimes people have apprehension about the role of AI, and I do have some concerns myself, but there are some things, especially large data, big data, for which there's no other way. Humans cannot analyze the large traffic data from routers.
"It is really amazing what AI can do with large volumes of data."
High-speed connectivity enables these capabilities, Ramamurthy said. As AI models grow and scientific facilities produce larger datasets, advances in both wired and wireless systems will be essential for the next wave of discovery.
What you can apply now
- Instrument your network: Centralize routing, flow and caching logs. Standardize schemas and timestamps to reduce data wrangling overhead (a normalization sketch follows this list).
- Start with clear targets: Focus ML on two high-value tasks first: anomaly detection (stability) and demand prediction (throughput/latency).
- Prefetch with intent: Use request histories and workload calendars to stage data near compute. Track hit ratios and queue times to validate gains (see the prefetching sketch below).
- Measure energy per bit: For optical upgrades, evaluate throughput, latency, failure domains and watts/GB. Cost/performance is now multi-dimensional (an energy-per-bit calculation follows below).
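On the instrumentation point, here is a minimal sketch of mapping heterogeneous log records onto one shared schema with UTC timestamps. The source names and field names ("router", "cache", "epoch", "octets" and so on) are hypothetical placeholders for whatever your exporters emit.

```python
# A minimal sketch of normalizing log records from different sources into
# one schema with ISO-8601 UTC timestamps. All field names are hypothetical.
from datetime import datetime, timezone

def normalize(record: dict, source: str) -> dict:
    """Map a source-specific record onto the shared schema."""
    if source == "router":
        # Router logs carry Unix epoch seconds and byte counts as "octets".
        ts = datetime.fromtimestamp(record["epoch"], tz=timezone.utc)
        return {"ts": ts.isoformat(), "host": record["rtr"], "bytes": record["octets"]}
    if source == "cache":
        # Cache logs carry ISO-8601 strings, possibly in a local time zone.
        ts = datetime.fromisoformat(record["time"]).astimezone(timezone.utc)
        return {"ts": ts.isoformat(), "host": record["node"], "bytes": record["size"]}
    raise ValueError(f"unknown source: {source}")

print(normalize({"epoch": 1739577600, "rtr": "core1", "octets": 4096}, "router"))
print(normalize({"time": "2026-02-15T09:30:00-06:00", "node": "c01", "size": 8192}, "cache"))
```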
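For demand prediction and prefetching, a toy simulation shows the feedback loop: a frequency-based predictor over recent request history stages files into a small cache, and the demand hit ratio measures the gain. The predictor, eviction policy and sizes are illustrative assumptions, not the DOE project's actual method.

```python
# A toy prefetching simulation: predict hot files from recent history,
# stage them in a bounded cache, and report the demand hit ratio.
from collections import Counter, deque

def simulate(requests, cache_size=3, prefetch_k=2, history_len=20):
    cache, order = set(), deque()        # FIFO eviction, for simplicity
    history = deque(maxlen=history_len)  # sliding window of recent requests
    hits = 0
    for name in requests:
        if name in cache:
            hits += 1
        else:
            cache.add(name)
            order.append(name)
        history.append(name)
        # Prefetch the k most frequent files in the recent window.
        for hot, _ in Counter(history).most_common(prefetch_k):
            if hot not in cache:
                cache.add(hot)
                order.append(hot)
        while len(cache) > cache_size:   # evict oldest entries
            cache.discard(order.popleft())
    return hits / len(requests)

trace = ["a", "b", "a", "c", "a", "b", "a", "d", "a", "b"] * 5
print(f"demand hit ratio: {simulate(trace):.2f}")
```

In production, the same hit-ratio and queue-time counters, logged per workflow, tell you whether a smarter predictor is actually paying off.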
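And for the energy metric, a back-of-the-envelope comparison that normalizes power draw to energy per bit. The module wattages and line rates below are made-up examples, not measurements.

```python
# A minimal sketch comparing transceiver options on energy per bit.
# Wattages and line rates are illustrative, not vendor data.
def joules_per_bit(watts: float, gbps: float) -> float:
    return watts / (gbps * 1e9)  # W divided by bits/s gives J/bit

options = {"100G module": (21.0, 100), "400G module": (24.0, 400)}
for name, (watts, gbps) in options.items():
    print(f"{name}: {joules_per_bit(watts, gbps) * 1e12:.0f} pJ/bit")
```

Folding this figure into the same dashboard as throughput and latency makes the multi-dimensional cost/performance comparison explicit.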
Team and collaborators
Principal investigator: Byrav Ramamurthy, professor of computing at the University of Nebraska-Lincoln.
Collaborators include doctoral students Sarat Barla, MAU Shariff and Srikar Chanamolu; researchers from the Indian Institute of Technology Madras on the optical network project; and Derek Weitzel, research associate professor of computing at Nebraska, on the Open Science Grid research. The Holland Computing Center also supports the work.
For researchers leveling up AI workflows
If you're building skills in data analysis or ML operations for research networks, explore focused training here: AI Certification for Data Analysis.