NSF Awards $20M to Brown-led ARIA to Develop Safer AI Assistants for Mental Health
NSF gives $20M to Brown's ARIA to study human-AI interaction and build safe assistants for mental and behavioral health. Expect benchmarks, guardrails, and clinician-in-loop tools.

NSF backs Brown's ARIA to make AI assistants safe and useful for mental health
A $20 million National Science Foundation grant is funding the AI Research Institute on Interaction for AI Assistants (ARIA) at Brown. The institute's aim is direct: study how humans and AI interact, then build assistants that support mental and behavioral health without causing harm.
"The reason we're focusing on mental health is because this is where current AI struggles most," said Ellie Pavlick, associate professor of computer science and cognitive and psychological sciences, who will lead ARIA. High-profile failures from chatbots have raised an urgent question: What do we actually want from AI in sensitive contexts?
Why ARIA matters for researchers
ARIA sits inside the NSF's broader investment aligned with federal AI priorities. The focus isn't to move fast and ship features-it's to create scientific footing for safe, controllable assistants that "treat humans well."
According to Michael Frank, director of the Center for Computational Brain Science at the Carney Institute, the work connects cognitive science, machine learning, HCI, and policy in a more systematic way. The goal is to generate methods and evidence that clinicians, developers, and policymakers can trust.
What the institute will study
- Human learning and cognition to inform assistant behavior and instruction-following
- Human-machine interaction protocols that reduce risk and clarify boundaries of use
- Evaluation methods for mental health scenarios, including failure modes and red-teaming
- Safety, controllability, and alignment with clinical best practices
- Legal and ethical frameworks, privacy, consent, and audit requirements
The institute includes Brown faculty and students and collaborators at Carnegie Mellon University, University of New Mexico, and Dartmouth. Alongside basic science, teams will examine patient safety guidelines and legal implications of AI in care settings.
Clinical stance: augmentation, not replacement
"The goal is not to replace human therapists," Frank said. Assistants may help with triage, psychoeducation, note drafting, or tracking adherence-while keeping humans in the loop for diagnosis, treatment, and crisis care.
Roman Feiman highlighted a core challenge: many people rely on large language models without knowing their limits. ARIA's work aims to make those limits clear, measurable, and enforceable in real use.
Ethics and evidence over hype
Assistant Professor Julia Netter, who studies ethics and responsible computing, called ARIA's approach "the right one" because it centers mental health experts, not just engineers. She also cautioned that this domain touches people at their most vulnerable, so any intervention must be well-tested and carefully governed.
Pavlick noted the team proposed ARIA years ago; funding lagged amid uncertainty at NSF. With support now in place, the institute can prioritize science over short-term product wins.
What to watch for
- Benchmark suites and protocols for evaluating mental health interactions
- Clinician-in-the-loop workflows and guardrails for escalation and crisis detection
- Data governance models that protect privacy while enabling rigorous study
- Open tools or guidelines that can be adopted by health systems and developers
"AI isn't going anywhere," Frank said. The real work is to understand it and control it for good. Feiman added a hope for a shift from quick product tweaks to a scientific effort that genuinely improves lives.
Further reading
Build responsible AI skills
If your work intersects with evaluation, safety, or clinical AI, explore current training options to stay current on LLM safety, auditing, and governance: Latest AI courses.