Built for engagement, not care: GPT-5's wake-up call for AI and mental health

GPT-5's colder tone revealed how many people rely on chatbots for emotional support. The episode underscores the need for clinical guardrails, clear labels, and human oversight to reduce harm.

Published on: Sep 29, 2025

What GPT-5 Reveals About the Mental Health Crisis

ChatGPT now sees nearly 700 million weekly users. Many lean on it for emotional support, even if that's not what they set out to use it for.

With GPT-5, some users reported colder, harsher replies. For people using ChatGPT to process stress, grief, or anxiety, the change felt less like a product update and more like losing a support system.

This moment forces hard questions for healthcare leaders and product teams: What happens when a general-purpose bot becomes a source of care? How are companies held accountable for the emotional side effects of design choices? What guardrails are required if these tools are going to touch mental health at all?

Why tone changes matter more than you think

Backlash across Reddit and other forums wasn't only about personality. It was about trust, safety, and the feeling of being seen.

OpenAI responded by making the system warmer and adding break nudges. Helpful, but incomplete. These models were built for engagement, not clinical safety. That gap is where harm can creep in.

The demand signal: care is scarce, AI is always on

In 2024, nearly 59 million Americans experienced a mental illness, and almost half went without treatment. Free, always-on chatbots fill the void for many people who can't access care or don't want to disclose high-stigma topics to a human.

The risk: users assume more safety and privacy than these tools provide. A small model update can have outsized psychological impact.

SAMHSA's national survey shows the scale of unmet need. Product decisions in this space are health decisions.

Engagement-first is a design flaw for mental health

Most general chatbots optimize for retention. Mental health care optimizes for autonomy. Those goals conflict.

Bots validate without discernment, comfort without context, and rarely challenge distorted thinking. For vulnerable users, this can drive dependence, delay real help, and feed delusions. Even OpenAI leadership has said ChatGPT should not be used as a therapist.

The minimum standard for AI that touches mental health

If your product may interact with emotionally vulnerable users, the following are not nice-to-haves. They are the baseline.

  • Transparent labeling: Clearly state capabilities, limitations, and that it is not a clinical tool.
  • Plain-language informed consent: How data is used, stored, and what the tool can and cannot do.
  • Clinician involvement: Build with evidence-based frameworks (e.g., CBT, motivational interviewing).
  • Ongoing human oversight: Clinicians monitor and audit outputs, with documented review cadence.
  • Usage guidelines: Promote self-efficacy and healthy off-ramps; don't enable avoidance or dependence.
  • Culturally responsive and trauma-informed design: Reduce bias; reflect diverse identities and experiences.
  • Escalation logic: Detect risk signals and route to human care, hotlines, or provider directories (a minimal routing sketch follows this list).
  • Security by default: Encryption in transit and at rest; access controls; audit logs.
  • Regulatory compliance: HIPAA/GDPR alignment where applicable. See HIPAA privacy basics.
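To make the escalation item concrete, here is a minimal sketch of risk-based routing. The names (assess_risk, EscalationDecision) and the keyword lists are illustrative assumptions, not a clinical tool; a production system would use a validated classifier and clinician-reviewed thresholds.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    NONE = "none"
    ELEVATED = "elevated"
    ACUTE = "acute"


@dataclass
class EscalationDecision:
    level: RiskLevel
    action: str          # what the product does next
    show_hotline: bool   # surface crisis resources (e.g., 988 in the US)


# Placeholder phrase lists; a real system would rely on a clinically
# validated classifier reviewed by the safety board, not keyword matching.
ACUTE_PHRASES = ("kill myself", "end my life", "suicide plan")
ELEVATED_PHRASES = ("hopeless", "can't go on", "self-harm")


def assess_risk(message: str) -> RiskLevel:
    text = message.lower()
    if any(p in text for p in ACUTE_PHRASES):
        return RiskLevel.ACUTE
    if any(p in text for p in ELEVATED_PHRASES):
        return RiskLevel.ELEVATED
    return RiskLevel.NONE


def route(message: str) -> EscalationDecision:
    level = assess_risk(message)
    if level is RiskLevel.ACUTE:
        # Stop the normal chat loop, show crisis resources, and queue
        # the conversation for immediate human review.
        return EscalationDecision(level, "handoff_to_human_triage", True)
    if level is RiskLevel.ELEVATED:
        # Keep responding, but surface hotlines and provider directories.
        return EscalationDecision(level, "offer_resources_and_referral", True)
    return EscalationDecision(level, "continue_normal_session", False)


if __name__ == "__main__":
    print(route("I feel hopeless lately"))
```

The point of the sketch is the routing structure, not the detection method: whatever model sits behind assess_risk, every elevated or acute signal should terminate in a human pathway, not another chatbot reply.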

Where AI can help right now: subclinical support and operations

The immediate opportunity is subclinical support. Many people don't need intensive therapy; they need structured, everyday help to process emotions and build habits before issues escalate.

AI can also reduce clinician burnout by handling admin: billing, documentation, and reimbursement workflows. Freeing clinicians' time is a direct path to better care, faster.

Design for outcomes, not stickiness

Engagement alone is the wrong metric. Optimize for long-term wellbeing and safe off-ramps.

That means clear boundaries, time limits, nudges to pause, and pathways to human care. If your product keeps people coming back without improving their health, you've built the wrong loop.
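As a rough illustration of what "time limits and nudges to pause" can look like in product logic, here is a small sketch. The thresholds, field names, and nudge labels are assumptions for the example; real values should come from clinician guidance and be evaluated against wellbeing outcomes, not retention.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class SessionState:
    started_at: datetime
    emotional_topic_turns: int  # turns flagged as emotionally heavy


# Illustrative thresholds, not recommendations.
MAX_SESSION = timedelta(minutes=45)
HEAVY_TURNS_BEFORE_OFFRAMP = 6


def next_nudge(state: SessionState, now: datetime) -> Optional[str]:
    """Decide whether this turn should include a pause prompt or an off-ramp."""
    if now - state.started_at > MAX_SESSION:
        return "break_nudge"  # suggest pausing and stepping away
    if state.emotional_topic_turns >= HEAVY_TURNS_BEFORE_OFFRAMP:
        return "human_care_offramp"  # suggest talking to a person or provider
    return None  # no intervention this turn


if __name__ == "__main__":
    session = SessionState(datetime.now() - timedelta(minutes=50), emotional_topic_turns=2)
    print(next_nudge(session, datetime.now()))  # -> "break_nudge"
```

The design choice worth noting: the off-ramp is triggered by emotional load, not just elapsed time, so the product de-escalates exactly when engagement-optimized systems would lean in.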

A practical roadmap for healthcare and product teams

  • Define the clinical posture: Is this general support, subclinical coaching, or clinical augmentation? Label it accordingly.
  • Ship safety primitives first: Risk detection, escalation routes, break nudges, refusal policies, and data controls.
  • Stand up a Clinical Safety Board: Clinicians, ethicists, and researchers with veto power over risky features.
  • Balance metrics: Pair retention with wellbeing KPIs (e.g., self-efficacy scores, symptom check-ins, successful referrals); see the sketch after this roadmap.
  • Guardrails in the prompt and the product: System prompts that avoid pseudo-therapy; UX that limits overuse and dependency.
  • Human-in-the-loop workflows: Triage queues, red-team audits, and post-incident reviews.
  • Privacy architecture: Data minimization, PHI handling policies, and role-based access. Document everything.
  • Bias and trauma reviews: Test across demographics and adverse experiences; fix failure modes before launch.
  • Referral and coverage integration: Provider directories, benefits verification, and warm handoffs to care.
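One way to operationalize the "balance metrics" item is to flag cohorts where engagement rises while wellbeing signals stay flat or fall. The sketch below is a minimal example; the metric names, scales, and thresholds are assumptions and would need review by the Clinical Safety Board.

```python
from dataclasses import dataclass


@dataclass
class CohortMetrics:
    # Engagement metric the team already tracks.
    weekly_retention: float             # fraction of users returning week over week
    # Wellbeing KPIs; names and scales are illustrative assumptions.
    avg_self_efficacy_delta: float      # change in a validated self-efficacy score
    symptom_checkin_improvement: float  # fraction of users whose check-ins improved
    successful_referral_rate: float     # fraction of flagged users who reached human care


def wrong_loop(m: CohortMetrics) -> bool:
    """Flag the pattern the article warns about: sticky but not helping.

    High engagement combined with flat or worsening wellbeing signals
    suggests the product is optimizing the wrong loop. Thresholds here
    are placeholders, not clinical cutoffs.
    """
    high_engagement = m.weekly_retention >= 0.6
    flat_or_worse_wellbeing = (
        m.avg_self_efficacy_delta <= 0.0
        and m.symptom_checkin_improvement < 0.2
        and m.successful_referral_rate < 0.1
    )
    return high_engagement and flat_or_worse_wellbeing


if __name__ == "__main__":
    cohort = CohortMetrics(0.72, -0.1, 0.15, 0.05)
    print(wrong_loop(cohort))  # True: users keep returning without getting better
```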

Policy, standards, and public trust

AI companions, therapy apps, and general-purpose chatbots are being conflated in the market. That confusion erodes trust and puts people at risk.

We need national standards that define roles, set boundaries, and enforce safety across consumer and enterprise use. Public-private collaboration should include clinicians, ethicists, engineers, researchers, policymakers, and users, and it should happen before release, not after harm.

The bottom line

GPT-5 didn't just expose a product issue. It exposed a system that treats engagement as a proxy for care.

If AI is going to touch mental health, it must be built with clinical insight, strong guardrails, and human-centered design. Do that well, and AI can support resilience, reduce burden on clinicians, and move people to the right level of care at the right time.

Resources for teams