Science and technology: From traffic systems to trustworthy AI, ASU undergrads take on problems we can't ignore
How do you trust AI when it doesn't know what it doesn't know? How do you move systems trained in simulation into real streets without risking safety? At Arizona State University's School of Computing and Augmented Intelligence, students start with questions like these and build research that answers them with data, prototypes and measurable results.
This year, four undergraduate researchers earned honorable mentions in the 2025-26 Outstanding Undergraduate Researcher Awards from the Computing Research Association. Their projects span statistical modeling, sim-to-real robotics, uncertainty-aware AI and sustainable computing: work that translates directly to high-impact deployments.
Alec Fishbach: Why people stay or leave
Focus: Retention and engagement in large professional organizations.
Fishbach, a computer science junior, studied how members join, disengage and return to professional communities. Working with associate professor Bing Si, he analyzed survey data from more than 7,000 members of INFORMS to uncover what sustains long-term participation.
The data wasn't clean. Responses were incomplete, overlapping and messy: the kind that breaks fragile pipelines. Fishbach built the data cleaning process and the study's primary model, collaborating with Si to validate accuracy. The payoff: a clearer view of the drivers behind inclusion and retention that organizations can act on.
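Fishbach's actual pipeline isn't public, but the core idea of salvaging incomplete responses rather than dropping them can be sketched in a few lines. Field names like `member_id` and `engagement` are hypothetical, and median imputation stands in for whatever the study actually used:

```python
from statistics import median

def clean_responses(responses):
    """Deduplicate survey responses by member id (keeping the most complete
    record) and median-impute missing values. Assumes non-id fields are
    numeric. Field names and method are illustrative, not the study's."""
    # Keep the most complete record per member instead of discarding duplicates.
    by_id = {}
    for r in responses:
        cur = by_id.get(r["member_id"])
        if cur is None or sum(v is not None for v in r.values()) > sum(
            v is not None for v in cur.values()
        ):
            by_id[r["member_id"]] = dict(r)
    cleaned = list(by_id.values())

    # Median-impute each field so incomplete rows are kept, not thrown away.
    fields = {k for r in cleaned for k in r if k != "member_id"}
    for f in fields:
        observed = [r[f] for r in cleaned if r.get(f) is not None]
        fill = median(observed) if observed else None
        for r in cleaned:
            if r.get(f) is None:
                r[f] = fill
    return cleaned
```

The design choice here matters: dropping every incomplete row would bias the sample toward the most engaged respondents, which is exactly the variable a retention study is trying to measure.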
"Research can be a great fit if you find yourself wanting to understand how and why things work beyond what is covered in class," Fishbach says.
Khoa Vo: Teaching robots about the real world
Focus: Sim-to-real transfer for autonomous systems.
Vo, a senior in computer science, tackled one of robotics' most stubborn issues: models that perform well in simulation but fail in the physical world. In Hua Wei's Data Mining and Reinforcement Learning Lab, he helped build a safe, physical testing setup with small robotic vehicles to bridge that gap.
Vo showed that learning from expert examples in simulation improved real-world behavior: vehicles stayed in lane and avoided obstacles more consistently. The work offers a practical path to reduce risk before field deployment. "My effort provided a user-friendly and comprehensive study, demonstrating the potential of simulation data in improving robot performance," Vo says.
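Learning from expert examples is commonly framed as behavioral cloning: record expert state-action pairs in simulation, then fit a policy to imitate them. A deliberately tiny sketch follows, using a nearest-neighbor lookup where a real pipeline would train a neural policy; the state layout and action names are invented for illustration, not taken from the lab's work:

```python
import math

class NearestNeighborPolicy:
    """Toy behavioral cloning: act by copying the expert action recorded
    at the nearest demonstrated state. Illustrative stand-in for a
    learned policy, not the lab's actual method."""

    def __init__(self, demonstrations):
        # demonstrations: list of (state, action); state is a tuple of floats
        self.demos = demonstrations

    def act(self, state):
        # Find the expert state closest to the current one and reuse its action.
        _, action = min(self.demos, key=lambda sa: math.dist(sa[0], state))
        return action

# Hypothetical lane-keeping demos: state = (lateral offset, heading error)
demos = [
    ((-0.5, 0.0), "steer_right"),
    ((0.5, 0.0), "steer_left"),
    ((0.0, 0.0), "straight"),
]
policy = NearestNeighborPolicy(demos)
```

The appeal of this setup is exactly what the article describes: the expensive, risky part (collecting expert behavior) happens in simulation, and only the distilled policy has to face the physical testbed.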
Owen Krueger: When AI should say "I don't know"
Focus: Calibrated uncertainty and overconfidence in AI models.
Krueger, a computer science senior working with associate professor Giulia Pedrielli, built controlled testbeds where uncertainty levels are known ahead of time. That setup let him observe how models behave when inputs are missing, noisy or out-of-distribution: conditions that show up in real deployments.
He then tested training strategies to rein in overconfidence, aiming for systems that can flag guesses instead of asserting false certainty. "If we don't know when AI is outside familiar territory, we can't responsibly deploy it in high-risk settings," Krueger says.
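A common baseline for "flagging guesses" is confidence-threshold abstention: the model answers only when its top softmax probability clears a threshold, and otherwise declines. This is a generic technique shown for context, not necessarily Krueger's method:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_or_abstain(logits, labels, threshold=0.8):
    """Return the predicted label only if the top softmax probability
    clears the threshold; otherwise abstain. A standard abstention
    baseline, used here purely for illustration."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    if probs[top] < threshold:
        return "I don't know"
    return labels[top]
```

The catch, and the reason calibration research exists, is that raw softmax probabilities are often overconfident on out-of-distribution inputs, so a threshold alone is a starting point rather than a guarantee.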
Shreyas Bachiraju: Where efficient AI meets urban sustainability
Focus: Edge-friendly AI for transportation and city systems.
Bachiraju, an informatics senior mentored by Hua Wei, addressed a common blind spot: AI designed for servers that must run on low-power devices in the field. He redesigned an AI system to run faster on constrained hardware (traffic cameras, roadside sensors and compact in-vehicle computers) without sacrificing accuracy.
He also examined why some large models demand so much energy and how to reduce that cost. "In urban and edge environments, constraints aren't something you can ignore," he says. "They're the design goal."
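One widely used way to shrink models for low-power hardware is post-training quantization: store weights as small integers plus a scale factor, trading a little precision for large memory and energy savings. A minimal symmetric-quantization sketch, illustrative only since the article doesn't specify which optimizations Bachiraju applied:

```python
def quantize_weights(weights, bits=8):
    """Symmetric post-training quantization: map float weights onto signed
    integers in [-(2^(bits-1)-1), 2^(bits-1)-1] with a single scale factor.
    A textbook technique, not necessarily the one used in this project."""
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero case
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for inference.
    return [x * scale for x in q]
```

An int8 weight takes a quarter of the memory of a float32 one, which is often the difference between a model fitting on a roadside sensor and not fitting at all.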
Why undergraduate research matters
Four recognitions in one cycle signal a culture that values hands-on inquiry. As Nadya Bliss, executive director of the ASU Advanced Capabilities for National Security Institute and former chair of the CRA Computing Community Consortium, puts it: "Undergraduate research is where students first encounter the real nature of computing. They learn how to work through open-ended problems, to iterate, to fail and to adapt."
That mindset shows up in each project: build testbeds before deployment, measure uncertainty instead of assuming confidence, clean messy data instead of discarding it and design for constraints from day one.
Practical takeaways for scientists and research engineers
- Treat messy data as a feature, not a bug. Build cleaning and imputation steps that preserve signal while reducing bias.
- Close the sim-to-real gap with incremental physical testbeds. Validate behaviors on safe, small-scale hardware before field trials.
- Quantify uncertainty explicitly. Use controlled scenarios to benchmark confidence calibration and detect out-of-distribution inputs.
- Design for edge constraints early. Profile latency, memory and energy to guide model choice and compression methods.
- Validate with multiple metrics. Accuracy alone hides brittleness; track calibration, robustness and resource use.
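The calibration point above can be made concrete with expected calibration error (ECE), a standard metric that bins predictions by confidence and compares each bin's average confidence against its actual accuracy:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: weighted average over confidence bins of
    |average confidence - accuracy|. 0.0 means perfectly calibrated."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into top bin
        bins[idx].append((conf, ok))

    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += len(b) / n * abs(avg_conf - accuracy)
    return ece
```

A model that says "90% confident" and is right 90% of the time scores near zero; a model that says "100% confident" but is right half the time scores 0.5, and that gap is exactly the overconfidence the work above targets.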
For more methods, frameworks and tools relevant to scientists and researchers working with AI, explore AI for Science & Research.