Colorado bill seeks to bar AI apps from replacing licensed therapists in mental health care

Colorado's House Bill 1195 would ban AI from being marketed as a substitute for therapy, diagnosis, or treatment planning. It also requires patient consent before AI tools handle tasks like transcription or note-taking.

Categorized in: AI News Healthcare
Published on: Apr 28, 2026
Colorado bill seeks to bar AI apps from replacing licensed therapists in mental health care

Colorado Weighs Guardrails for AI in Mental Health Care

Colorado lawmakers are considering legislation that would prohibit artificial intelligence from being marketed or used as a substitute for psychotherapy, diagnosis, or treatment planning. House Bill 1195 aims to establish clear protections for people seeking mental health care, requiring informed consent before AI tools are used in supporting roles like transcription or note-taking.

The bill addresses the growing use of large language models-including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini-for mental health guidance and emotional support. These tools are advancing rapidly and increasingly deployed in deeply personal contexts, yet operate under no privacy guarantees or safety standards specific to healthcare.

Privacy and Data Control Questions

Personal health information shared with AI apps receives no protection under HIPAA, the federal law that governs healthcare privacy. Data entered into these systems may be retained indefinitely in corporate databases, used in legal proceedings, or repurposed in ways users cannot control or predict.

The corporate incentive structure matters. These apps are designed to maximize engagement and generate profit-not to prioritize user safety or clinical outcomes.

How Large Language Models Work

LLMs generate responses by predicting likely text based on patterns in massive datasets. They do not understand context, apply clinical judgment, or draw on lived human experience the way trained therapists do.

For administrative tasks, this approach works well. Therapists could use AI to reduce paperwork burden, freeing time for direct patient care. For mental health treatment itself, the limitations become critical.

The Risk of Reinforcement Over Challenge

Effective therapy requires clinicians to reframe problems, challenge distorted thinking, and recognize when someone needs crisis intervention or medical treatment. LLMs tend to do the opposite-they reinforce a user's existing framing to maintain engagement.

Consider someone with undiagnosed depression asking a chatbot, "Am I right to be sad?" The system, designed to keep the user engaged, might respond: "Yes, your life is hard, and you should feel that way." That validation can deepen harmful thought patterns rather than interrupt them.

Licensed therapists are ethically bound and clinically trained to identify when someone needs higher levels of care. They assess risk. They intervene appropriately. They work with the whole person, not just disembodied text.

The Informed Consent Problem

People seeking mental health support should know whether they are talking to a licensed professional or an algorithm. HB 1195 would require that transparency.

The bill does not reject AI outright. It channels the technology toward appropriate uses while protecting people from risks the systems are not equipped to manage.

Mental health problems are human problems. They require human care delivered in a safe, private space with clinical accountability. AI can be a useful tool in that context. It cannot be a substitute for it.

Learn more about AI for Healthcare and understand how Generative AI and LLM systems function and their limitations in clinical settings.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)