Opening the 'black box' of AI: UMaine project puts interpretability first
AI can flag tumors, sort photos, and draft reports in seconds. The problem: most systems won't tell you how they reached a decision. Chaofan Chen, assistant professor of electrical and computer engineering at the University of Maine, is building models that show their work and learn directly from user feedback.
Backed by a five-year, $584,034 National Science Foundation CAREER award, Chen's project - "Opening the Black Box: Advancing Interpretable Machine Learning for Computer Vision" - runs through June 30, 2030. The focus is practical: transparent computer-vision systems that are easier to audit, correct, and trust.
Why this matters
High-accuracy models still make silent errors, and when their reasoning is hidden those errors are hard to catch. In health care, public safety, and research, that opacity carries real consequences: you can't judge the evidence behind a prediction - or fix it - if you can't see it.
"We live in an exciting era of AI breakthroughs, and my mission is to create systems that don't just give answers but reveal their reasoning and can improve themselves based on human feedback," Chen said. He added, "In high-stakes settings, black-box AI isn't just a mystery - it's a risk."
What the research delivers
- Visible reasoning in vision models: Multimodal explanations that pair predictions with human-readable evidence, not just heatmaps.
- Editable reasoning: Users can correct a model's chain of thought (e.g., "this feature is irrelevant"), and the system updates its logic and parameters accordingly (a toy sketch follows this list).
- Transparent generative pipelines: Models that explain how an image was constructed step by step, rather than only outputting a final result.
- Interpretable decision policies: Reinforcement learning methods that keep decision paths legible during both training and deployment.
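To make "visible reasoning" and "editable reasoning" concrete, here is a minimal toy sketch in Python. It is an illustration only, not Chen's actual architecture: a prototype-style classifier that cites which learned prototypes support each prediction and lets a user switch one off. All class and variable names are hypothetical.

```python
# Toy illustration (not the project's model): predictions are weighted sums of
# similarities to learned prototypes, so the supporting evidence is inspectable
# and editable.
import numpy as np

rng = np.random.default_rng(0)

class PrototypeClassifier:
    def __init__(self, n_prototypes, n_classes, feature_dim):
        self.prototypes = rng.normal(size=(n_prototypes, feature_dim))
        self.class_weights = np.abs(rng.normal(size=(n_prototypes, n_classes)))
        self.active = np.ones(n_prototypes, dtype=bool)  # user-editable evidence mask

    def similarities(self, x):
        # Gaussian similarity to each prototype; pruned prototypes score 0.
        dists = np.linalg.norm(self.prototypes - x, axis=1)
        return np.exp(-dists ** 2) * self.active

    def predict_with_evidence(self, x, top_k=3):
        sims = self.similarities(x)
        logits = sims @ self.class_weights
        top = np.argsort(sims)[::-1][:top_k]  # most similar prototypes = cited evidence
        return int(np.argmax(logits)), [(int(p), float(sims[p])) for p in top]

    def mark_irrelevant(self, prototype_id):
        # "Editable reasoning": the user rules out one piece of evidence,
        # and every future prediction stops relying on it.
        self.active[prototype_id] = False


model = PrototypeClassifier(n_prototypes=8, n_classes=3, feature_dim=16)
x = rng.normal(size=16)

label, evidence = model.predict_with_evidence(x)
print("prediction:", label, "| supporting prototypes:", evidence)

model.mark_irrelevant(evidence[0][0])  # user feedback: "this feature is irrelevant"
print("after correction:", model.predict_with_evidence(x))
```

The design point is that the evidence the model cites is the same quantity it computes the prediction from, so editing the evidence genuinely changes the decision rather than just the explanation.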
How the two-way loop works
Chen's team is building tools that surface the model's evidence and intermediate steps - concepts, prototypes, and counterfactuals - so users can see why a prediction was made. When something looks off, people can correct the rationale, not just the label. That feedback becomes training signal, tightening the model's behavior over time.
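As a rough illustration of how a correction can become a training signal - a sketch under assumed conditions, not the project's method - the snippet below trains a tiny logistic model over named features and adds a penalty that suppresses any feature a user flags as irrelevant. The feature names and data are synthetic.

```python
# Hypothetical sketch: a user correction ("ignore this feature") becomes an
# extra loss term that drives the flagged feature's weight toward zero.
import numpy as np

rng = np.random.default_rng(1)

# Named, interpretable features; "ruler_in_frame" stands in for a dataset shortcut.
feature_names = ["lesion_texture", "ruler_in_frame", "border_irregularity"]

X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(float)
X[:, 1] = y - 0.5 + 0.3 * rng.normal(size=200)  # spurious cue correlated with the label

def train(X, y, flagged=(), penalty=5.0, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    mask = np.array([name in flagged for name in feature_names], dtype=float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the task loss
        grad += penalty * mask * w         # gradient of the "ignore this feature" penalty
        w -= lr * grad
    return w

w_before = train(X, y)
w_after = train(X, y, flagged={"ruler_in_frame"})  # user: "the ruler is irrelevant"
for name, b, a in zip(feature_names, w_before, w_after):
    print(f"{name:20s} before={b:+.2f}  after={a:+.2f}")
```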
What this means for researchers and practitioners
If you work with computer vision in clinical imaging, lab automation, geospatial analysis, or public safety, interpretability isn't a nice-to-have. It shortens error analysis, exposes dataset shortcuts, and makes audits defensible. A few ways to act on that:
- Define "acceptable evidence" per task (e.g., anatomical regions or physical features) and require models to cite it.
- Log rationales alongside predictions for downstream review, replication, and regulatory documentation (a minimal logging sketch follows this list).
- Build human-in-the-loop correction into your annotation workflow so you fix failure modes at the source.
- Evaluate explanations with tests that catch spurious cues, distribution shifts, and fragile reasoning.
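One way to act on the logging point above is to write one JSON line per prediction that pairs the decision with the evidence cited for it, so reviewers and auditors can replay the reasoning later. The field names here are hypothetical, not a standard schema.

```python
# Hypothetical rationale log: one JSON line per prediction, decision + evidence.
import json
import datetime

def log_prediction(path, image_id, label, confidence, evidence, model_version):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "image_id": image_id,
        "prediction": label,
        "confidence": confidence,
        "evidence": evidence,          # e.g., prototypes, concepts, or image regions cited
        "model_version": model_version,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_prediction(
    "rationales.jsonl",
    image_id="scan_0042",
    label="benign",
    confidence=0.91,
    evidence=[{"type": "prototype", "id": 7, "region": [34, 80, 96, 142], "similarity": 0.83}],
    model_version="demo-0.1",
)
```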
Education and workforce impact
A key piece of the award brings interpretable AI into Maine classrooms. Chen will partner with the Maine Mathematics and Science Alliance to create high school lessons on responsible, explainable systems - giving students a clear picture of how AI decides and how to question it.
"Dr. Chen's CAREER project tackles one of AI's most urgent challenges, opening the black box so computer-vision systems explain their decisions in ways people can trust, especially in high-stakes settings," said Yifend Zhu, professor and chair of UMaine's Department of Electrical and Computer Engineering.
Funding and timeline
The project is supported by the NSF CAREER program and jointly funded by NSF's Robust Intelligence (RI) program and the Established Program to Stimulate Competitive Research (EPSCoR). Work continues through June 30, 2030.
Go deeper
- NIST AI Risk Management Framework - guidance on trustworthy AI, including explainability and transparency.
- NSF CAREER Program - details on the award supporting this research direction.
Building team capability
If your group is standing up interpretable AI workflows or retraining staff, curated learning paths can speed things up. See role-based options here: Complete AI Training - Courses by Job.