Connecticut's SB 86: AI safeguards for kids and a sandbox for builders
Connecticut Gov. Ned Lamont has proposed Senate Bill 86 to set safety rules for AI companion chatbots and to promote AI development in insurance, finance, and health services. The pitch: protect teens from risky chatbot behavior while giving builders a place to test products with regulatory support.
Lamont said he prefers national standards, but he doesn't want states sidelined while federal action stalls. "Not heavy-handed, but guardrails," he said. The bill has been referred to the legislature's General Law Committee.
Why a regional approach
After a recent federal executive order signaled limits on state-level AI regulation, Lamont argued that states shouldn't be blocked from addressing AI risks. "I don't want 50 states all doing their own thing… That would discourage some new services from coming here," he said. Connecticut is in talks with neighboring states, including California and Massachusetts, to align on a regional path if Congress doesn't move.
What SB 86 would require for AI companion tools
Concerns about teen usage are front and center. A 2025 Common Sense Media survey found that 72% of teens have used AI companions at least once. The bill cites reports of chatbots encouraging self-harm and sets clear expectations for safety.
- Distress detection and escalation: AI companions must recognize signs of mental health distress and refer users to crisis services when appropriate.
- Clear identity: At least every two hours, the system must remind users they're chatting with AI, not a person.
Lamont's stance is simple: "When it comes to protecting people from the excesses of AI or social media, start with the kids."
Common Sense Media has published ongoing research on teen digital safety that may inform implementation details.
Data, sandbox, and industry focus
- AI-ready public data: Connecticut's Open Data Portal would expand with datasets formatted for AI use, while still honoring existing disclosure laws. This could accelerate model training and evaluation for local use cases. Explore the portal at data.ct.gov.
- AI regulatory sandbox: The state aims to attract companies to test and iterate products with regulatory feedback, especially in insurance, finance, and healthcare. Business leaders say this can drive productivity and help these clusters lead in applied AI.
Lamont also sees AI as core to education and training: "AI will be integrated in everything… a second language for people."
What this means for IT and product teams
If you build or operate AI companions, expect concrete safety work. These are practical moves to get ahead of SB 86:
- Distress classifier: Add a high-recall layer for self-harm, suicide, and severe depression signals (cover prompts and model outputs). Build an escalation playbook that routes users to crisis resources and restricts risky responses.
- Session timing: Implement a reliable two-hour reminder that the agent is AI. Persist this across tabs/devices and reset on long sessions.
- Safety UX: Standardize non-therapeutic disclaimers, provide quick links to help lines, and log interventions for audits.
- Privacy and age: Review data handling for minors. Minimize retention of sensitive content and tighten access controls.
- Offline mode and outages: Define safe fallback behavior if your safety services (classifiers, APIs) degrade.
- Eval suite: Maintain test sets for self-harm prompts, boundary cases, and jailbreak attempts. Track drift and re-run after model updates.
For teams in insurance, finance, and health, the sandbox could reduce regulatory risk during pilots. Prepare concise model cards, risk assessments, and data lineage so you can enter a sandbox track quickly and make the most of regulator feedback.
Opportunities in state data
AI-ready datasets from the Open Data Portal can support feature engineering, benchmarking, and fine-tuning for local needs-claims triage, benefits eligibility support, fraud signals, provider directories, and more. Stay within license and disclosure constraints, and avoid attempts to re-identify individuals.
Timeline and what to watch
- SB 86 is proposed, not enacted. Requirements may change during committee review and amendments.
- Regional coordination could align rules across multiple states, which would simplify compliance for multi-state products.
- Federal action could preempt some state provisions. Design your safety stack to meet the strictest likely standard without overfitting to one jurisdiction.
Action checklist
- Add a distress detection pipeline and escalation flow to companion products.
- Implement the two-hour AI identity reminder with robust session management.
- Draft model cards, safety policies, and audit logs for sandbox readiness.
- Evaluate CT Open Data datasets for train/eval-document data provenance.
- Monitor the General Law Committee docket for SB 86 changes.
If your team needs to level up on responsible AI build practices and safety workflows, see focused developer tracks at Complete AI Training - AI Certification for Coding.
Your membership also unlocks: