Help at 2 a.m., Risks All Day: The New Reliance on AI for Mental Health

Patients are already using AI chatbots for support when therapy is costly or unavailable. Clinics should limit that use to CBT skills practice under clinical oversight and steer it away from crisis response and attachment work.

Published on: Oct 01, 2025

AI companions are filling therapy gaps. Here's how clinicians can use them without courting harm.

Kristen Johansson lost her therapist overnight when insurance coverage vanished and a $30 copay became $275 per session. Six months later, she leaned on ChatGPT's paid tier for daily support and found something humans can't match: 24/7 availability, zero perceived judgment, and instant responses.

She's not alone. OpenAI reports massive weekly usage and millions of paid subscribers, a share of whom use chatbots as their most accessible support when care is unaffordable or waitlists are long. For healthcare teams, this isn't a hypothetical. Patients are already bringing AI into their mental health routines, often without telling you.

Where AI can help, under tight clinical boundaries

Used the right way, chatbots can extend evidence-based care between sessions. Structured methods like CBT translate well to scripted prompts, worksheets, and exposure hierarchies that don't depend on a deep relational bond.

One psychiatrist-bioethicist put it plainly: stick to skills, homework, and psychoeducation; avoid simulating attachment or transference-heavy work. The therapeutic frame matters. So does oversight.

  • Reasonable uses: CBT worksheets, thought records, exposure ladders, sleep diaries, behavioral activation planning, medication reminders, psychoeducation, and skills rehearsal.
  • High-risk uses: Crisis response, diagnosis, trauma processing, psychodynamic work, grief work without a therapist, minors without parental/clinical oversight, and any "I love you/I care for you" relationship simulation.

Practical upside: rehearsal and skill-building

Kevin Lynch, 71, used a chatbot to rehearse tense conversations with his spouse. He fed past exchanges to the model, saw how tone shifts changed outcomes, and practiced slowing down. That low-pressure rehearsal translated into fewer fight-or-freeze moments at home.

Known risks your clinic must manage

  • False intimacy: Bots can mimic empathy and attachment, prompting dependency they can't ethically hold or safely terminate.
  • Crisis failure modes: There are reports of users disclosing suicidal intent to chatbots without the system flagging or escalating the disclosure. These tools are not crisis lines.
  • Data exposure: Most consumer chatbots aren't HIPAA-covered entities. PHI may be logged, reviewed, or used to train models.
  • Engagement-first design: Reinforcement for "more validation" can keep users chatting, not healing.
  • Evidence gap: Only one randomized controlled trial of an AI therapy bot has been published to date, and that bot isn't widely deployed. Effectiveness and harm rates remain uncertain.
  • Care fragmentation: If patients hide chatbot use, guidance can conflict and derail treatment plans.
  • Vulnerable populations: Teens, children, patients with OCD or anxiety disorders, and cognitively impaired older adults are at higher risk from suggestion and attachment.

Teen safety and policy signals

OpenAI has announced additional teen guardrails and acknowledged tensions between safety, privacy, and freedom. These steps help, but they are not a substitute for clinical governance, parental involvement, and clear protocols.

Clinic protocol: integrate AI safely, or ask patients to stop

  • Ask every patient about chatbot use during intake and at intervals. Document the tool, purpose, and frequency.
  • Informed consent addendum: Explain limits, data risks, crisis non-suitability, and that the bot is not a clinician.
  • Define scope: Restrict to CBT skills, psychoeducation, and homework. No trauma processing or relationship simulation.
  • Assign structured tasks: Provide written prompts for thought records, exposure steps, or behavioral activation plans so the bot's guidance aligns with your treatment.
  • Crisis plan: Require a clear path for acute risk (hotlines, same-day clinic contacts, emergency services) and an explicit "no chatbot for crisis" rule.
  • Data hygiene: Advise patients not to share PHI, addresses, or identifiable third-party details with consumer bots.
  • Supervision loop: Review chatbot transcripts or summaries in session. Correct distortions and reinforce skills.
  • Adverse event reporting: Create an internal pathway to log and review harmful chatbot outputs and adjust care plans (a minimal logging sketch follows this list).
  • Minors: Require guardian consent, age-appropriate tools, content filters, and closer monitoring, or defer chatbot use entirely.
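
For clinics that want the adverse-event pathway above to be more than a memo, here is a minimal Python sketch of what a single log record might capture. The field names and severity tiers are illustrative assumptions, not a standard, and the record deliberately excludes PHI.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class Severity(Enum):
        MILD = "mild"          # unhelpful or off-topic output
        MODERATE = "moderate"  # guidance that conflicts with the treatment plan
        SEVERE = "severe"      # missed crisis cue or actively harmful suggestion

    @dataclass
    class ChatbotAdverseEvent:
        # One internal record of a harmful or off-plan chatbot output.
        # All field names are illustrative; adapt to your clinic's workflow.
        reported_on: date
        patient_ref: str            # internal pseudonymous ID, never PHI
        tool_name: str              # e.g., "consumer chatbot, paid tier"
        summary: str                # clinician's brief description of the output
        severity: Severity
        care_plan_change: str = ""  # what, if anything, was adjusted in response

A record like this makes quarterly review possible: sort by severity, look for repeat tools or prompts, and decide whether the protocol needs tightening.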

Minimum safety features to require from any tool

  • Automated crisis detection with clear escalation to human help (a minimal sketch of this pattern follows the list).
  • Option to disable training on user inputs; transparent data policies; export/delete controls.
  • Session logs you and the patient can review.
  • Content filters against romantic or overly attached responses.
  • Age-gating and parental controls for youth.
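
To make the crisis-detection requirement concrete, here is a minimal Python sketch of a keyword screen that suppresses the bot's normal reply and routes the user to human help. The phrase list, message wording, and function name are illustrative assumptions, not a validated screening instrument; any real tool should use clinically reviewed detection logic.

    # Minimal sketch only: phrases and escalation message are placeholders.
    CRISIS_PHRASES = (
        "kill myself",
        "end my life",
        "suicide",
        "hurt myself",
        "don't want to be alive",
    )

    ESCALATION_MESSAGE = (
        "I can't help with a crisis. Please call or text 988 (U.S.) now, "
        "or use your clinic's same-day contact."
    )

    def screen_message(user_text: str) -> tuple[bool, str | None]:
        """Return (crisis_detected, override_reply).

        If a crisis phrase appears, the bot's normal reply should be suppressed,
        the override message shown, and the event logged for clinical review.
        """
        lowered = user_text.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            return True, ESCALATION_MESSAGE
        return False, None

    if __name__ == "__main__":
        detected, reply = screen_message("Lately I don't want to be alive")
        if detected:
            print(reply)  # in a real tool: escalate to a human and log the event

The point of the sketch is the shape of the behavior, not the word list: detection must override the conversation, name a human pathway, and leave an auditable trace.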

Human + AI: a workable hybrid

Some patients use a chatbot between appointments for grief prompts, meal reminders, or workout encouragement, then process meaning in session. One patient even introduced her "Alice" bot to her therapist. The therapist welcomed the extra support, kept boundaries clear, and focused on the emotional cues a bot can't read: posture, silence, tears.

That's the model: bots for structure and reminders, clinicians for nuance and accountability.

CBT remains the anchor

If you greenlight chatbot use, anchor it to CBT. Provide prompt templates for cognitive restructuring, exposure hierarchies, and behavioral activation, such as the examples sketched below. Keep it boring on purpose: reliable reps over "chatty" connection.
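
As an illustration of "boring on purpose," here is a small Python sketch of prompt templates a clinic might hand to patients for structured homework with a general-purpose chatbot. The wording is an example only and assumes the treating clinician reviews and adapts it before use.

    # Illustrative prompt templates for structured CBT homework with a
    # general-purpose chatbot. Example wording only; the treating clinician
    # should review and adapt it before handing it to a patient.
    CBT_PROMPTS = {
        "thought_record": (
            "Guide me through a CBT thought record, one step at a time: the "
            "situation, my automatic thought, the emotion and its intensity "
            "(0-100), evidence for and against the thought, and a balanced "
            "alternative thought. Do not give advice beyond the worksheet."
        ),
        "exposure_ladder": (
            "Help me build an exposure hierarchy for a fear my therapist and I "
            "chose. Ask me to list 8-10 situations, rate each 0-100 for "
            "anticipated anxiety, and order them from easiest to hardest. Do "
            "not add new exposure targets."
        ),
        "behavioral_activation": (
            "Help me plan behavioral activation for the next three days. Ask "
            "for two valued activities per day, when I will do them, and what "
            "might get in the way. Keep answers short and remind me to log "
            "completion."
        ),
    }

Templates like these keep the bot inside the worksheet: they name the steps, cap the scope, and tell the model what not to do.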

For patient education, see a plain-language overview of CBT from the American Psychological Association: APA: Cognitive Behavioral Therapy.

Evidence and expectations

The evidence base is thin. The one randomized controlled trial of an AI therapy bot to date is promising, but that bot is not in wide clinical use. Push vendors for published outcomes, harm data, and independent audits before integrating any tool into care pathways.

What to tell your teams and patients, clearly

  • "This tool is for skills practice and reminders, not diagnosis or crisis."
  • "Don't share identifying information. This is not HIPAA-covered."
  • "If you feel worse, stop using it and contact us."
  • "We will review how you use it during sessions and adjust the plan together."

Bottom line

AI companions can extend care where access breaks, but only inside firm guardrails. Keep them in the lane of CBT homework, keep clinicians in the loop, and keep crises with humans. If your clinic can't monitor use and protect data, don't endorse it.

If you or someone you know may be considering suicide or be in crisis: call or text 988 in the U.S., or visit 988lifeline.org for chat and resources.

Staff upskilling

If your team needs structured training on safe, clinical-grade AI use, see curated options by job role here: Complete AI Training: Courses by Job.