AI in Mental Health: Big Promise, Real Risks, and Rules to Keep Care Human

AI is moving into mental health faster than policy can keep up. Experts say the field needs clear standards, strong privacy protections, attention to equity, and clinicians in charge, especially for high-risk cases.

Published on: Mar 01, 2026

Experts Weigh Opportunities and Risks of AI in Mental Health Care


A new publication from the American Academy of Arts and Sciences and a launch panel brought clinicians, researchers, and innovators together to assess how AI is being used across mental health services. The message was clear: adoption is outpacing policy, and the field needs sharper definitions, clearer expectations, and oversight that reflects what clinicians and patients already experience.

The Academy began this work in fall 2023 and released "AI and Mental Health Care: Issues, Challenges, and Opportunities" on December 9, 2025. The discussion focused on effectiveness, safety, privacy, and equity, especially for high-risk populations.

Why it matters

Trust and safety sit at the center of mental health care. As AI tools enter intake, screening, triage, and ongoing support, the stakes rise for data privacy, bias, and unintended harm, particularly for people with severe mental illness or limited access to care.

The goal is not to replace clinicians. It is to support human judgment, expand access responsibly, and make care more consistent and responsive.

Where AI is showing up today

  • Clinicians are testing AI for screening, risk stratification, triage, documentation, and between-session support.
  • The public is using general-purpose chatbots for mental health support, often outside clinical oversight.

Key distinctions to keep in view

  • Purpose-built therapeutic AI vs. general chatbots: These are different categories with different evidence needs, regulatory considerations, and guardrails.
  • Augment, don't replace: Human clinicians remain accountable for diagnosis, risk decisions, and care planning.
  • Risk tiers matter: Higher-risk use cases (e.g., crisis response, severe mental illness) require stricter evaluation, escalation paths, and continuous monitoring.
  • Evidence before scale: Prospective evaluation, bias testing, and equity impact reviews should be standard, not optional.

Who is leading the Academy's effort

  • Paul Dagum - Founder and former CEO of Mindstrong; co-chair of the Academy's project on AI and mental health care.
  • Sherry Glied - Co-chair of the Academy's project on AI and mental health care; professor at New York University.
  • Alan Leshner - Co-chair of the Academy's project on AI and mental health care; former CEO of the American Association for the Advancement of Science.
  • Kacie Kelly - Chief Innovation Officer at the Meadows Institute; member of the project's steering committee.
  • Arthur Kleinman - Psychiatrist and professor of anthropology at Harvard University; member of the project's steering committee.

What they're saying

  • "There's tremendous promise, but the concerns are real." - Paul Dagum, project co-chair
  • "Humans are essential." - Arthur Kleinman, steering committee member
  • "General-purpose AI chatbots are different from AI designed to deliver therapy." - Kacie Kelly, steering committee member

Practical steps for health systems and clinics

  • Define the use case: Be specific about the clinical problem, target population, and setting. Avoid "do-everything" tools.
  • Set risk tiers and guardrails: Map which tasks are low, medium, or high risk. Require human review for medium/high risk.
  • Build escalation protocols: For crisis signals (e.g., self-harm risk), route to a human within defined timeframes.
  • Run prospective evaluations: Track clinical outcomes, false positives/negatives, clinician workload, and patient experience before wide rollout.
  • Address equity and bias: Test performance across demographics. Document mitigations and retest after updates.
  • Protect privacy and security: Limit data collection, encrypt in transit and at rest, and confirm vendor safeguards and data use terms.
  • Clarify consent and transparency: Tell patients when AI is involved, what it does, and how data is used. Offer an opt-out when feasible.
  • Integrate with clinical workflows: Keep humans in the loop, minimize clicks, and make AI outputs auditable.
  • Create governance and monitoring: Assign owners for model performance, drift detection, incident response, and vendor updates.
  • Vendor due diligence: Ask for intended use, evaluation methods, known failure modes, update cadence, and post-market surveillance plans.
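The risk-tier and escalation steps above can be sketched in code. This is a minimal, illustrative sketch, not a clinical protocol: the tier labels, review windows, and function names (`route`, `AIOutput`, `crisis_flag`) are all hypothetical, and any real thresholds would come from a system's own governance process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., appointment reminders, note drafting
    MEDIUM = "medium"  # e.g., screening summaries, triage suggestions
    HIGH = "high"      # e.g., crisis response, severe mental illness

# Maximum time (minutes) before a human must review an AI output, by tier.
# Illustrative values only, not clinical guidance.
REVIEW_WINDOW_MINUTES = {
    RiskTier.LOW: 24 * 60,
    RiskTier.MEDIUM: 4 * 60,
    RiskTier.HIGH: 15,
}


@dataclass
class AIOutput:
    task: str
    tier: RiskTier
    crisis_flag: bool = False  # e.g., model detected self-harm language


def route(output: AIOutput) -> dict:
    """Decide whether an AI output needs human review, and how quickly."""
    # Crisis signals always escalate to a human immediately,
    # regardless of the task's nominal risk tier.
    if output.crisis_flag:
        return {"action": "escalate_to_clinician", "within_minutes": 0}
    # Low-risk outputs can be delivered with audit logging; medium and
    # high tiers require a human in the loop within the review window.
    if output.tier is RiskTier.LOW:
        return {"action": "log_and_deliver",
                "within_minutes": REVIEW_WINDOW_MINUTES[output.tier]}
    return {"action": "queue_for_human_review",
            "within_minutes": REVIEW_WINDOW_MINUTES[output.tier]}
```

The design choice worth noting is that the crisis check runs before the tier check: even a nominally low-risk tool (say, a documentation assistant) escalates immediately if it surfaces a crisis signal, which mirrors the "route to a human within defined timeframes" guidance above.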


What's next

The Academy's publication offers a way to separate what is known from what is still uncertain, and it outlines the kinds of evidence needed for responsible integration. Expect continued focus on evaluation standards, clearer definitions, and policy that meets current practice.

Learn more about the Academy's work at American Academy of Arts & Sciences.

The takeaway

AI use in mental health care is growing faster than guidance. The field needs precise definitions, measurable expectations, and oversight grounded in clinical reality. With the right safeguards, AI can extend access and consistency while keeping humans at the center of care.
