Colorado Passes Three AI Bills Targeting Healthcare, Therapy, and Chatbots
Colorado lawmakers are sending three artificial intelligence bills to Gov. Jared Polis in the final days of the 2026 legislative session, establishing rules for AI use in health insurance decisions, psychotherapy, and consumer chatbots.
Insurance Companies Can't Rely Solely on AI for Coverage Decisions
House Bill 1139 prohibits health insurance companies from basing coverage decisions exclusively on group data collected by AI systems. The bill requires insurers to consider a patient's medical history and other individual factors alongside any algorithmic recommendations.
The measure passed the House 47-15 and cleared the Senate with only three Republican votes against it. Sponsors framed the bill as ensuring AI supports healthcare without replacing human judgment and accountability.
Rep. Sheila Lieder, D-Littleton, raised a core concern: "Artificial intelligence systems are increasingly being used in utilization management and related processes that can influence whether care is approved or denied." She questioned what oversight exists when automated systems deny or approve care.
Therapists Must Review AI Recommendations Before Using Them With Patients
House Bill 1195 restricts how therapists and social workers deploy AI in clinical practice. The bill prohibits using AI to generate treatment recommendations or plans without a licensed clinician's review, and requires patient consent before AI records or transcribes therapy sessions.
The measure also bars unlicensed individuals from offering psychotherapy services, even with AI assistance.
Rep. Javier Mabrey, D-Denver, cited research showing one in eight young people aged 12 to 21 uses AI chatbots for mental health advice. He warned that chatbots are designed to keep users engaged and mirror emotional tone rather than challenge problematic thinking, a particular risk during mental health crises.
The House passed the bill unanimously. The Senate approved it 33-2.
Chatbot Operators Face New Disclosure and Safety Requirements
House Bill 1263 targets conversational AI platforms with disclosure and safety rules. Operators must inform users they're communicating with AI, cannot offer minors points or rewards that encourage extended use, and must implement "reasonable measures" to prevent sexually explicit content or statements that simulate emotional dependence.
The bill also requires chatbots to flag prompts mentioning suicidal ideation or self-harm, and prohibits operators from claiming chatbot advice equals licensed professional services.
The House passed the measure 40-24, and the Senate approved it 24-11, with sizable opposition in both chambers.
Parents of Suicide Victims Say Bill Doesn't Go Far Enough
Grieving families challenged the chatbot bill during Senate debate. Lori and Avery Schott, whose 18-year-old daughter died by suicide in 2020 after conversing with a chatbot, objected that parents were excluded from drafting the legislation.
In a letter read on the Senate floor, the Schotts wrote: "Legislation must protect children, and not create a false sense of safety to parents. This opens doors for tech to self-regulate and shield tech from liability."
Sen. Byron Pelton, R-Sterling, whose district includes the Schott family, spoke with emotion about their concerns. "I know that there's a lot of things we can do," he said, "but I just would like the parents to be heard."
Senators Say Bill Language Lets Tech Companies Self-Regulate
Multiple senators criticized the bill's vagueness, particularly the phrase "technically feasible measures" for preventing harmful chatbot outputs.
Sen. Lisa Frizell, R-Castle Rock, said the language gives tech companies a pass. "They have the responsibility to control it," she said of the software developers. "If they can't, then we have a much bigger problem."
Sen. Dylan Roberts, D-Frisco, pointed out the bill lacks a statutory definition of "technically feasible," meaning developers would decide what counts as feasible. "This bill would allow them to have that immunity," he said. "The way the bill is currently written does not get it right, and it will not do anything."
Sen. Matt Ball, D-Denver, countered that the bill, while imperfect, represents progress over current statute, which contains no chatbot rules.
For healthcare professionals, these bills signal Colorado's attempt to establish baseline guardrails on AI in clinical and therapeutic settings. How other states respond, and how courts interpret terms like "technically feasible," will likely shape the broader regulatory landscape.