Study Suggests Generative AI Could Improve Mental Health Care for Diverse Populations
Date: November 17, 2025
Cortney VanHook, assistant professor of social work at the University of Illinois, and co-authors tested how generative AI could support clinical planning and training. Their study built a personalized treatment plan for a fictitious client, "Marcus Johnson," a 24-year-old Black man in Atlanta who works as a software developer.
The goal was simple: see if AI can help clinicians and students think more clearly, move faster from intake to plan, and keep cultural context front and center.
What the team did
The authors asked an AI platform to apply three evidence-based theoretical frameworks to a simulated case. After they supplied details about Marcus' lifestyle, family, and symptoms, the AI outlined factors that could help or hinder his use of care, assessed his access to services, and proposed measures to track symptoms and functioning over time.
To ground the output in real practice, the authors reviewed the plan for clinical accuracy and checked it against published research. They also examined the case for cultural sensitivity and whether it reflected barriers Black men often face in the U.S. healthcare system.
Why this matters for healthcare teams
- Education and training: Create realistic case briefs for students who may have limited exposure to certain populations.
- Faster iteration: Generate draft plans, risk factors, and monitoring frameworks to refine in supervision.
- Structured thinking: Surface social, cultural, and systemic factors that are easy to miss in busy settings.
- Consistency: Standardize case materials across cohorts or sites while keeping cultural nuance in view.
Limits you should respect
The authors are clear: AI reflects the data and patterns it was trained on. It can miss nuance, fail to capture lived experience, and should not be treated as a substitute for clinical judgment or community insight.
Use it as an adjunct. Human review remains essential, especially for culturally responsive care and safety-sensitive decisions.
Practical steps to pilot this in your setting
- Start in education or supervision: Use AI-generated cases for discussion, then compare with established guidelines and literature.
- Set guardrails: No real PHI in prompts. Keep an audit trail of prompts and outputs (a minimal logging sketch follows this list). Document final clinical reasoning separately.
- Bias checks: Review outputs for stereotypes or gaps. Invite feedback from colleagues with relevant cultural expertise.
- Localize resources: Ask the model to consider geography, insurance, transportation, and community supports, then verify accuracy.
- Measure what matters: Align AI-suggested measures with validated tools you already use and track change over time.
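One way to implement the audit-trail guardrail is a thin logging wrapper around whatever AI platform you use. The sketch below is a minimal, hypothetical Python example: `call_model` is a stand-in for your own client, not a real API, and the wrapper simply appends each prompt and response, with a timestamp and a label for the simulated case, to a local JSONL file. Only fictitious case material should pass through it; there is no automatic PHI detection here.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_case_audit.jsonl")  # append-only log of prompts and outputs


def call_model(prompt: str) -> str:
    # Placeholder: swap in the SDK or HTTP call for the AI platform your team actually uses.
    return "[model output would appear here]"


def generate_with_audit(prompt: str, case_label: str) -> str:
    """Send a prompt for a *simulated* case and record both sides of the exchange.

    `case_label` should identify the fictitious case, never a real client.
    Final clinical reasoning is documented separately, outside this log.
    """
    response = call_model(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_label": case_label,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return response


if __name__ == "__main__":
    draft = generate_with_audit(
        "Draft a treatment-planning discussion case for a fictitious 24-year-old client.",
        case_label="Marcus Johnson (simulated teaching case)",
    )
    print(draft)
```

Keeping the log as plain JSONL makes it easy to review prompts and outputs for bias or PHI slips during supervision, and to compare drafts across cohorts or sites.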
What they said
"AI is a train that's already in motion, and it's picking up speed," said Dr. VanHook. "So, the question is: How can we use this amazing tool to improve mental health care for many populations? My hope is that it is used in the field, as a tool for teaching and within higher-order management and administration when it comes to mental health services."
VanHook co-authored the study with Daniel Abusuampeh (University of Pittsburgh) and Jordan Pollard (University of Cincinnati).
Where this fits in the broader picture
For teams focused on equity, this approach can help illuminate barriers such as cost, stigma, access, and provider bias before a client ever sits down for intake. It won't solve those barriers on its own, but it can make planning more structured and transparent.
For context on mental health disparities affecting Black communities, see the U.S. Office of Minority Health's overview of mental and behavioral health in African Americans.
Want to upskill your team on AI in practice?
If you're building faculty or staff capabilities, explore role-based AI learning paths at Complete AI Training. Start small, set clear guardrails, and keep people at the center of care.