NSF CAREER Awards Back Six BU Researchers Advancing Trustworthy AI, Algorithms, and Safer Robots
The National Science Foundation has recognized six Boston University researchers with CAREER awards, supporting projects that make AI more transparent, robots safer, data systems more usable, and algorithms more practical for hard problems.
Each project blends technical depth with clear societal value. Funding will also support student researchers, growing a pipeline of talent across computing and engineering.
Trustworthy Medical AI: From Black Box to Clear Reasoning
Kayhan Batmanghelich (Electrical and Computer Engineering) is building methods to translate opaque medical AI into human-understandable logic: rules, simple programs, and explanations clinicians can audit. His team will also use AI to evaluate other AI systems, focusing on breast cancer and chronic lung disease models.
Impact: clearer decision pathways, easier error detection, and stronger foundations for equitable diagnosis. For clinical AI teams, this signals a shift: explanations and auditability sit next to accuracy as must-haves.
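One way to make "auditable rules" concrete, purely as an illustrative sketch and not the project's actual method, is surrogate rule extraction: probe a black-box model on sample inputs and search for a simple threshold rule that mimics its decisions. The `black_box` function and its features below are hypothetical stand-ins.

```python
import random

def black_box(features):
    # Hypothetical stand-in for an opaque medical model: flags high
    # risk when lesion size and density are both elevated.
    size, density = features
    return 1 if size > 0.6 and density > 0.5 else 0

def fit_rule(samples):
    """Search simple 'feature > threshold' rules and keep the one that
    agrees most often with the black box on the sampled inputs."""
    best = None
    for feat in (0, 1):
        for t in [i / 20 for i in range(20)]:
            agree = sum((1 if s[feat] > t else 0) == black_box(s)
                        for s in samples)
            if best is None or agree > best[0]:
                best = (agree, feat, t)
    return best

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(500)]
agree, feat, threshold = fit_rule(samples)
print(f"best rule: feature[{feat}] > {threshold:.2f} "
      f"(agreement {agree / len(samples):.0%})")
```

The extracted rule is something a clinician can read and challenge, and the agreement score quantifies how faithfully it tracks the opaque model, which is exactly where auditing can begin.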
Fairer Allocation of Social Services: Mechanism Design Without Hidden Costs
Kira Goldner (Computing & Data Sciences) studies the math behind "ordeals," the paperwork and long waits that ration access to public benefits. Her work analyzes when rationing through ordeals beats directly verifying eligibility, and seeks simple, explainable mechanisms with provable guarantees.
Outcome: policies that reduce burden on people in need, improve equity, and preserve program integrity. Expect actionable guidance on where to require proof and where to simplify.
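The rationing-versus-ordeal tradeoff can be illustrated with a toy simulation (a hypothetical sketch, not Goldner's model): ordeals steer aid toward higher-need applicants through self-selection, but they burn real welfare in the process.

```python
import random

random.seed(1)

# Toy population: each person has a need level in [0, 1];
# the program can only serve half of them.
people = [random.random() for _ in range(10_000)]
capacity = len(people) // 2

# Option A, random rationing: recipients look like the population,
# so their average need is about 0.5.
rationed = random.sample(people, capacity)

# Option B, an ordeal (paperwork, queues) costing each applicant
# effort worth 0.5 benefit units: only people whose need exceeds
# 0.5 find it worthwhile to apply.
applicants = [n for n in people if n > 0.5]
served = applicants[:capacity]

avg = lambda xs: sum(xs) / len(xs)
print(f"avg need under rationing: {avg(rationed):.2f}")
print(f"avg need under ordeal:    {avg(served):.2f}")
# The ordeal targets better, but every recipient burned 0.5 in
# effort, so the net benefit per recipient can end up *worse*.
print(f"net benefit per ordeal recipient: {avg(served) - 0.5:.2f}")
```

The toy makes the policy question vivid: the ordeal roughly lifts average recipient need from about 0.5 to about 0.75, yet after subtracting the effort cost, the net benefit delivered can fall below simple rationing. Deciding which regime wins, and designing cheaper screens, is where the mechanism-design math comes in.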
Streaming Analytics for Everyone: Efficient Systems at Human Scale
Vasiliki Kalavri (Computer Science) develops software systems that help nonexperts analyze continuous data from wearables, phones, vehicles, and sensors. The goal: efficient, scalable, secure analytics that researchers and small organizations can actually use.
Applications include smart cities, digital health, Earth observation, and disease surveillance. The team will also build training and internships to help scientists turn raw streams into decisions.
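A minimal example of the kind of continuous computation involved, a sliding-window average over sensor readings, can be sketched in a few lines of Python (illustrative only; real streaming engines add parallelism, fault tolerance, and security on top of kernels like this):

```python
from collections import deque

def sliding_mean(stream, window):
    """Yield the mean of the last `window` readings for each new
    arrival, the kind of per-device aggregation a streaming engine
    evaluates continuously."""
    buf = deque(maxlen=window)
    total = 0.0
    for x in stream:
        if len(buf) == buf.maxlen:
            total -= buf[0]          # evict the oldest reading
        buf.append(x)
        total += x
        yield total / len(buf)

readings = [10, 12, 11, 50, 13, 12]  # e.g., pulse samples from a wearable
means = [round(m, 1) for m in sliding_mean(readings, window=3)]
print(means)  # [10.0, 11.0, 11.0, 24.3, 24.7, 25.0]
```

Note the incremental update: each new reading costs constant work regardless of stream length, which is what makes such analytics feasible on never-ending data.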
Approximation Algorithms for NP-Hard Problems: Practical Guarantees
Nathan Klein (Computer Science) targets tighter guarantees for classic NP-hard graph problems, including the traveling salesperson problem. Exact solutions are often computationally infeasible; approximation algorithms quickly deliver solutions that are provably close to optimal.
Why it matters: logistics routing, airline scheduling, and circuit design rely on these methods every day. Better bounds and techniques translate into cost savings and more reliable planning.
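The flavor of such guarantees shows up in the classic MST-doubling heuristic for metric TSP (a textbook sketch, not Klein's new techniques): because a spanning tree costs no more than an optimal tour, shortcutting a walk of the tree yields a tour at most twice optimal.

```python
import math

def mst_tsp_tour(points):
    """Metric-TSP 2-approximation: build a minimum spanning tree with
    Prim's algorithm, then shortcut a preorder walk of the tree.
    The triangle inequality bounds the tour at twice the optimum."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, parent = {0}, {}
    while len(in_tree) < n:                       # Prim's MST
        u, v = min(((u, v) for u in in_tree
                    for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        parent[v] = u
        in_tree.add(v)
    children = {i: [] for i in range(n)}
    for v, u in parent.items():
        children[u].append(v)
    tour, stack = [], [0]                         # preorder walk
    while stack:
        node = stack.pop()
        tour.append(node)
        stack.extend(reversed(children[node]))
    return tour

points = [(0, 0), (0, 1), (1, 1), (1, 0)]         # unit square
tour = mst_tsp_tour(points)
length = sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(tour, round(length, 2))
```

Research like Klein's pushes such constants down (for metric TSP, below the long-standing 1.5 of Christofides-style methods), and each improvement transfers directly to the routing and scheduling systems built on these algorithms.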
Safer Robots: Motion Planning on Custom Compute
Sabrina Neuman (Computer Science) will use real-world robot constraints (limb size, geometry, and motion limits) to automatically design specialized computing hardware that plans motion quickly and safely. The focus is on speeding up heavy planning workloads while improving energy efficiency.
Result: more reliable human-robot interaction in homes, factories, and healthcare. The approach embraces hardware-software co-design to meet stringent safety and latency needs.
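The compute bottleneck being targeted is easiest to see in the search kernels themselves. A toy occupancy-grid planner (illustrative only, not Neuman's system) shows the kind of inner loop, thousands of neighbor expansions and collision checks, that specialized hardware aims to accelerate:

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search over an occupancy grid, a toy stand-in for
    the collision-checking and search kernels that dominate motion
    planning and that custom hardware aims to accelerate."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:                         # reconstruct the path
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None                                    # no collision-free path

grid = [[0, 0, 0],      # 0 = free cell, 1 = obstacle
        [1, 1, 0],
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))
print(path)
```

On a real robot this loop runs over far larger state spaces under tight latency budgets, which is why baking the robot's geometry and motion limits into the silicon itself can pay off.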
Accessible AI for Low Vision: Learn Directly from Users
Eshed Ohn-Bar (Electrical and Computer Engineering) will build datasets from real interactions and preferences of people with impaired vision. His team will train AI systems that learn from user feedback, fixing issues like wrong directions, vague descriptions, and ignored corrections.
Vision: AI that adapts to individual needs as a default setting. Expect broader standards for personalization and accessibility that carry over to phones, wearables, and smart city systems.
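Learning from accept/reject feedback can be sketched with a multiplicative-weights update over response styles (a hypothetical toy, not Ohn-Bar's system; the style names are invented):

```python
import math
import random

# Hypothetical description styles; a real system would learn far
# richer preferences from interaction data.
styles = ["brief", "detailed", "spatial"]
weights = {s: 1.0 for s in styles}
LR = 0.5

def choose():
    """Sample a style proportionally to its current weight."""
    r = random.random() * sum(weights.values())
    for s in styles:
        r -= weights[s]
        if r <= 0:
            return s
    return styles[-1]

def feedback(style, accepted):
    """Multiplicative-weights update: downweight rejected styles."""
    if not accepted:
        weights[style] *= math.exp(-LR)

random.seed(2)
# Simulated user who only accepts spatially grounded directions.
for _ in range(200):
    s = choose()
    feedback(s, accepted=(s == "spatial"))

preferred = max(weights, key=weights.get)
print(preferred)  # the style this user's feedback favors
```

Even this crude loop converges on the style the simulated user accepts; the research challenge is doing the same from messy real-world corrections, at the level of individual users rather than invented categories.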
Why this matters for science and research leaders
- Interpretability moves from a nice-to-have to a requirement in clinical AI.
- Mechanism design can cut administrative burden while preserving fairness.
- Streaming systems must be usable by nonexperts, not just specialists.
- Approximation beats intractability: guarantees matter more than perfect answers.
- Hardware-aware planning enables safer, faster robots in human spaces.
- User-in-the-loop datasets produce assistive AI that earns trust.