Brown's $20M ARIA Institute Charts a Path to Trustworthy AI Assistants for Mental Health

Brown launches NSF-backed ARIA to build trustworthy, context-aware AI assistants, with a focus on mental and behavioral health. Open, cross-disciplinary collaboration sets the tone.

Categorized in: AI News, Science and Research
Published on: Nov 22, 2025

Researchers from across computer science, psychology, neuroscience and related fields met at Brown University on Nov. 20-21 to kick off the National Science Foundation-funded AI Research Institute on Interaction for AI Assistants (ARIA). The five-year, $20 million effort will focus on AI assistants that can interact with people in ways that are safe, sensitive and grounded in real context.

Led by Ellie Pavlick, associate professor of computer science at Brown, the multi-institution team will direct a significant share of its work at mental and behavioral health - a fast-growing area for AI use where trust and safety aren't optional. The launch meeting set the tone: ambitious, cross-disciplinary, and openly collaborative.

Why ARIA exists

The core bet is clear: to be useful in high-stakes settings, AI assistants must read context, adapt to the person in front of them and make their reasoning legible. That's especially true in care environments, where missteps carry real risk.

Pavlick summarized the planning challenge well: "With a group like this addressing a problem that's so enormous, there are probably 40 different research themes that we could choose. We wanted to start thinking about which five or so themes are ones that best build on our research strengths and are also really important to the problem."

Kickoff: collaboration and open science

Day one centered on structured brainstorming: scoping questions, methods and evaluation plans the team can drive over the next five years. Day two opened the doors to the broader research community in Brown's Engineering Research Center, signaling that ARIA won't be a closed shop.

As Pavlick put it, the institute won't "go back to our labs and let people know in five years what we came up with." The goal is to share plans early, invite feedback and build with stakeholders from the start.

Scientific inputs from psychology and psychiatry

Julian Jara-Ettinger (Yale) presented work on human social intelligence - how people infer the goals, beliefs and preferences of others and adjust behavior in response. Insights from that science are essential for building AI systems that can interact with people in a way that feels natural and safe.

Nicole Nugent (Brown), a clinical psychologist, responded with a clinician's lens. "One of the things that I'm so excited about for your center is that you have built, very intentionally, these interdisciplinary connections," she said. "You have… cognitive scientists, you have computer scientists, you have mental health folks. I would encourage everyone here today to think about how you can take this interdisciplinary approach and keep moving it forward."

Initial research themes

  • Interpretability: making it clear how systems arrive at responses, so developers, clinicians and patients can audit and trust behavior.
  • Adaptability: tuning interaction policies to different users, contexts and cultural norms without losing safety.
  • Participatory design: involving clinicians, patients and other stakeholders early to shape requirements, interfaces and guardrails.
  • Trustworthiness metrics: exploring a Consumer Reports-style rating for AI used in mental health, with transparent criteria and reproducible tests.
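To make the last theme concrete, a Consumer Reports-style rating boils down to transparent criteria, explicit weights, and a reproducible scoring rule. The sketch below is purely illustrative: the criteria names, weights, and 0-5 scale are assumptions, not anything ARIA has announced.

```python
from dataclasses import dataclass

# Hypothetical rubric -- criteria and weights are illustrative assumptions,
# not part of ARIA's actual plans.
@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights sum to 1.0

CRITERIA = [
    Criterion("crisis_escalation", 0.4),    # routes users in crisis to humans?
    Criterion("context_sensitivity", 0.3),  # adapts to user history and norms?
    Criterion("transparency", 0.3),         # can responses be audited?
]

def overall_rating(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores (each on a 0-5 scale)."""
    return sum(c.weight * scores[c.name] for c in CRITERIA)

rating = overall_rating({
    "crisis_escalation": 5.0,
    "context_sensitivity": 4.0,
    "transparency": 3.0,
})  # 0.4*5 + 0.3*4 + 0.3*3 = 4.1
print(round(rating, 2))
```

The point of publishing the rubric alongside the score is that anyone can rerun the tests and check the arithmetic, which is what "transparent criteria and reproducible tests" demands.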

What progress could look like

  • Shared benchmarks that test context-awareness, safety and helpfulness across diverse populations.
  • Open protocols for clinician-in-the-loop data collection and evaluation.
  • Clear reporting standards for failure modes, uncertainty and off-policy behavior.
  • Practical guidance for deploying assistants in clinical and community settings.
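A shared reporting standard for failure modes, as the third bullet envisions, amounts to agreeing on a machine-readable record format. The schema below is a hypothetical sketch: every field name is an assumption made for illustration, not an ARIA specification.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical failure-mode report -- field names are illustrative
# assumptions, not an ARIA reporting standard.
@dataclass
class FailureReport:
    assistant_id: str
    failure_mode: str     # e.g. "missed crisis signal"
    severity: str         # "low" | "medium" | "high"
    uncertainty: float    # model confidence at the time of failure, 0-1
    context_summary: str  # de-identified description of what happened

report = FailureReport(
    assistant_id="demo-assistant-v1",
    failure_mode="missed crisis signal",
    severity="high",
    uncertainty=0.35,
    context_summary="User disclosed distress indirectly; assistant continued small talk.",
)

# Serializing to JSON makes reports easy to aggregate across labs and deployments.
print(json.dumps(asdict(report), indent=2))
```

Agreeing on even a minimal record like this would let benchmarks, clinician-in-the-loop evaluations, and deployment audits feed one shared evidence base.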

Institutional backing

Brown Provost Francis J. Doyle framed the institute's role succinctly: "To be successful, institutes like ARIA really need to be nexus points for scientific innovation - building collaborations on and off campus, and facilitating connections between researchers and practitioners alike, both within and outside the academy. Brown could not be more proud - and prepared - to do this work, building on our great strengths… in interdisciplinary collaboration and transcending boundaries."

For researchers: how to plug in

  • Track NSF's AI Institutes program overview for updates, opportunities and related calls for participation.
  • Upskill teams on prompt design and evaluation methods to support interpretability and safety workstreams.

ARIA brings the right mix: technical depth, clinical perspective and a commitment to open collaboration. If your work touches conversational AI, human-computer interaction or mental health, now is the time to connect and contribute.

