State AI Chatbot Laws Are Here: What Legal Teams Need To Do Now
AI chatbots now sit in customer support, mental health coaching, education, gaming, and everything in between. With that growth has come harm, including cases involving minors and vulnerable users. A federal bill is on the table, but its path is unclear. States are moving first, and the mix won't be uniform.
California and New York are stepping in with rules focused on "human-like" chatbots that simulate relationships. Expect more states to follow with overlapping, and sometimes conflicting, obligations. Legal, compliance, and product teams will need a clear playbook to reduce risk and keep products shipping.
What counts as a "companion" chatbot
These systems respond in natural language, remember context, and engage over time. Many use affective or persona cues that make them feel more like a companion than a tool. That's the policy trigger: when a chatbot starts to look and feel human, lawmakers worry about psychological and social harm, especially to minors.
California SB 243 - effective January 2026
California targets "companion chatbots" that provide adaptive, human-like responses to meet social needs, including anthropomorphic features and sustained relationships across multiple interactions. The statute includes a private right of action, creating significant litigation exposure. Plaintiffs can seek actual damages or statutory damages of $1,000 per violation.
- Disclosures: Tell users they're interacting with AI if a reasonable person could think it's a human.
- Crisis protocol: Maintain and publish a crisis response plan; file annual safety reports with the Office of Suicide Prevention.
- Minors: For known minors, remind them every three hours that the chatbot isn't human, encourage breaks during long sessions, and implement reasonable measures to prevent sexually explicit content.
- Exclusions: Bots used solely for operational utility, certain in-game characters limited to game-related dialogue, and basic voice assistants.
Bill text and status updates are available on the California Legislative Information site: SB 243.
New York's companion law - effective in November
New York covers "AI companions" that simulate sustained relationships by doing three things: retaining prior interactions to personalize responses, initiating unprompted emotion-based questions that go beyond direct prompts, and sustaining ongoing dialogue on personal matters. There is no carve-out for game characters.
- Crisis protocol: Detect self-harm indicators and refer users to crisis services.
- Disclosures: Notify users at least once per day and every three hours during ongoing interactions that they are engaging with AI, not a human.
- Penalties: Up to $15,000 per day per violation.
Track bill text and updates via the New York State Legislature: Legislation.
Other state activity
Massachusetts has proposed disclosure requirements and would give legal effect to chatbot communications, treating them like statements by human agents. Maine prohibits using AI chatbots in trade or commerce in ways that could mislead consumers into believing they are interacting with a human, unless a clear and conspicuous disclosure is provided. Expect more bills with different scoping tests.
Edge cases most teams overlook
- Gaming NPCs: If characters remember choices, express simulated emotions, or discuss topics beyond the game (e.g., mental health, self-harm, sexual content), California's exclusion may not apply. New York has no exclusion.
- Virtual influencers: Parasocial "friend" dynamics and ongoing personalized dialogue can pull them into scope.
- Wellness and language learning apps: Not labeled as therapy, but emotionally aware chat and persistent memory can still qualify.
Fast scoping test before you ship
- Purpose/persona: Is the bot framed as supportive, companion-like, or social?
- Memory: Does it retain user details across sessions to personalize?
- Initiation: Does it ask unprompted, emotion-based questions?
- Content scope: Can it discuss mental health, self-harm, sexuality, or personal matters unrelated to the core service?
- User base: Any known minors or likely minor users?
- Safeguards: Does it have self-harm detection, sexual content filters, break prompts, and escalation paths?
- Form factor: Does voice, an avatar, or other anthropomorphic traits risk misleading a reasonable person into thinking it's human?
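If you want to operationalize that test, a minimal sketch is below: it captures the questions as structured fields and errs toward flagging bots for legal review. The field names and the "likely in scope" heuristic are illustrative assumptions, not the statutory tests in California or New York.

```python
# Illustrative scoping aid (not a legal test): the fields mirror the questions
# above, and the heuristic errs toward flagging bots for counsel review.
from dataclasses import dataclass

@dataclass
class ChatbotProfile:
    companion_persona: bool              # framed as supportive, companion-like, or social
    cross_session_memory: bool           # retains user details across sessions
    unprompted_emotional_questions: bool
    personal_topics: bool                # mental health, self-harm, sexuality, personal matters
    known_or_likely_minors: bool
    anthropomorphic_form: bool           # voice, avatar, or other human-like traits

def likely_in_scope(bot: ChatbotProfile) -> bool:
    """Conservative flag: any companion-like trigger sends the bot to legal review."""
    return any([
        bot.companion_persona,
        bot.cross_session_memory and bot.unprompted_emotional_questions,
        bot.personal_topics,
        bot.anthropomorphic_form and bot.known_or_likely_minors,
    ])

# Example: a wellness app with memory and emotion-aware prompts gets flagged.
wellness_bot = ChatbotProfile(
    companion_persona=False,
    cross_session_memory=True,
    unprompted_emotional_questions=True,
    personal_topics=True,
    known_or_likely_minors=False,
    anthropomorphic_form=False,
)
assert likely_in_scope(wellness_bot)
```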
Compliance checklist you can start this quarter
- Disclosures: Persistent UI labels and timed reminders (e.g., daily and every three hours for ongoing sessions where required); a minimal timer sketch follows this list.
- Crisis response: Classifiers for self-harm indicators; scripted referrals to crisis services; staff playbooks; logging for audits.
- Minors: Age-assurance flows, periodic "not a human" reminders, break nudges, and sexual content filtering.
- Reporting: Annual safety reports (California); internal incident taxonomy and escalation SLAs.
- Testing: Red-team runs focused on grooming, self-harm, sexual content, and impersonation; scenario coverage tracked to closure.
- Records: Interaction logs, model/version history, policy changes, risk assessments, and approval memos.
- Product controls: Geo-gated features, rate limits, safety "modes," and kill-switches for high-risk outputs.
- UI copy: Clear, plain-language disclosures; avoid designs that could mislead a reasonable person.
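To make the disclosure cadence concrete, here is the timer sketch referenced in the checklist. The interval values and the session-active trigger are assumptions drawn from the checklist item, not from the statutes; confirm the actual cadence for each jurisdiction with counsel.

```python
from datetime import datetime, timedelta

# Assumed cadence from the checklist above (daily, plus every three hours during
# an ongoing session); the real intervals and triggers must come from each statute.
DAILY = timedelta(days=1)
ONGOING_SESSION = timedelta(hours=3)

def disclosure_due(last_disclosure: datetime, now: datetime, session_active: bool) -> bool:
    """Return True when the 'you are chatting with an AI' notice should be reshown."""
    interval = ONGOING_SESSION if session_active else DAILY
    return now - last_disclosure >= interval

# Example: three and a half hours into a continuous session, a reminder is due.
start = datetime(2026, 1, 15, 9, 0)
assert disclosure_due(start, start + timedelta(hours=3, minutes=30), session_active=True)
```

Log every reminder actually shown; those entries feed the audit trail called for under Records.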
Litigation and enforcement risk
California's private right of action invites individual and class claims. New York's daily penalties stack quickly. Expect state AG scrutiny under unfair and deceptive acts and practices theories, especially for misleading human-like design or weak crisis protocols.
Review arbitration clauses, class action waivers, and warranty disclaimers, but don't lean on them as a shield. Plaintiffs will point to logs, safety gaps, and internal memos. Your best defense is documented, proactive controls.
Contracts, insurance, documentation
- Vendor contracts: Allocate disclosure duties, crisis-response obligations, age-assurance, logging, and update support. Add audit rights.
- Warranties/indemnities: Harm-reduction features, compliance with applicable state laws, and coverage for regulatory actions.
- Insurance: Review coverage for chatbot-related harms, especially those involving minors and mental-health contexts.
- Documentation: Capture intended use, out-of-scope uses, risk ratings, DPIAs/TRA notes, approval sign-offs, and change logs.
Two workable strategies for the patchwork
- Highest common denominator: Apply the strictest requirements nationwide. Simpler ops, higher cost.
- Geo-gated compliance: Turn on features and obligations by state; a feature-flag sketch follows below. Lower footprint, more engineering and QA overhead.
Whichever route you choose, start with an inventory of every conversational surface your company runs. Flag anything with memory, emotion, or ongoing dialogue. Then build disclosures, crisis flows, and logging into the product, not as an afterthought.
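For the geo-gated route, a hedged sketch of the underlying feature-flag map is below. The state codes, obligation names, and conservative default are illustrative assumptions, not a statement of what each statute requires; the real mapping should reflect counsel's reading and live under change control.

```python
# Hypothetical per-state obligation map; names and defaults are illustrative only.
STATE_OBLIGATIONS: dict[str, set[str]] = {
    "CA": {"ai_disclosure", "crisis_protocol", "minor_reminders",
           "sexual_content_filter", "annual_safety_report"},
    "NY": {"ai_disclosure", "timed_reminders", "self_harm_referral"},
}

# "Highest common denominator" alternative: apply the union of all obligations everywhere.
NATIONWIDE_BASELINE: set[str] = set().union(*STATE_OBLIGATIONS.values())

def obligations_for(state: str, geo_gated: bool = True) -> set[str]:
    """Pick which safety controls to enable for a user's state under either strategy."""
    if not geo_gated:
        return NATIONWIDE_BASELINE
    return STATE_OBLIGATIONS.get(state, {"ai_disclosure"})  # conservative default elsewhere

print(sorted(obligations_for("NY")))                    # geo-gated: NY-specific controls
print(sorted(obligations_for("TX", geo_gated=False)))   # strictest-everywhere mode
```

Either way, the map itself becomes a compliance record: version it, review it with counsel when statutes change, and tie it to your logging and documentation.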
Key dates and actions
- Now: Inventory chatbots, map features to state triggers, and gap-assess disclosures and crisis protocols.
- Before New York's effective date in November: Implement timed notifications and self-harm detection plus referral flows.
- Before January 2026 (California): Stand up reporting to the Office of Suicide Prevention, minor-specific measures, and plaintiff-ready documentation.
- Ongoing: Quarterly red-team tests, policy refreshes, and contract updates with vendors powering chat features.
Bottom line
If your chatbot remembers, empathizes, or keeps the conversation going, treat it as in scope until proven otherwise. Build disclosures and crisis handling into the UX, keep clean records, and choose a compliance strategy you can operate. Early action beats discovery requests every time.