AI Chatbots on Notice: California's SB 243, FTC 6(b), and Preparing for Subpoenas and CIDs

AI chatbots face lawsuits and new rules, with a spotlight on risks to minors. Expect tougher disclosures, age checks, and FTC scrutiny as states roll out companion bot laws.

Published on: Jan 20, 2026

AI Chatbots Face Rising Legal and Legislative Scrutiny

January 19, 2026

AI chatbots now sit inside social platforms, search, and classrooms. With that reach comes a surge of lawsuits and government interest, especially around mental health risks for minors. Parents and former users have sued OpenAI over alleged failures to prevent suicidal ideation, bringing assisted-suicide, wrongful-death, and manslaughter claims. Expect more subpoenas and civil investigative demands (CIDs) as agencies press for answers.

What regulators are doing right now

  • California's SB 243 took effect January 1, 2026, setting rules for "companion chatbots."
  • Similar bills are on the table in Florida (HB 659), Massachusetts (S 264 / S 243), Missouri (HB 2032 / HB 2031), New Jersey (A 6246), Pennsylvania (SB 1090 / HB 2006), Washington (SB 5870), and Tennessee (HB 1455 / SB 1493).
  • A California ballot initiative, the "Parents & Kids Safe AI Act," would add age-assurance and new limits on selling children's data if it qualifies for the November ballot.
  • Forty-two attorneys general warned that "sycophantic and delusional" chatbot outputs may violate consumer-protection and children's privacy laws.
  • The FTC launched a 6(b) inquiry into AI companion products from Google, Character.AI, Meta, Snapchat, and OpenAI.

California's SB 243: scope and duties

SB 243 targets AI systems that provide adaptive, human-like responses and sustain relationships across multiple interactions. It excludes customer-service bots, video game features, and general voice assistants like Alexa and Siri.

  • Disclosure: If a user could be misled into thinking they're chatting with a human, the provider must clearly state it's AI.
  • Minors: For users under 18, providers must disclose the use of AI, send a recurring reminder every three hours that the chatbot is "not human," and take reasonable steps to prevent certain explicit content.
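The three-hour reminder duty is straightforward to operationalize: track when each minor's session last received the notice and re-send it once the interval elapses. The sketch below is illustrative only, not legal guidance; class and function names are invented, and the reminder wording is a placeholder, not statutory language.

```python
from datetime import datetime, timedelta

# SB 243 requires a recurring "not human" reminder for minors;
# three hours is used here per the statute's interval.
REMINDER_INTERVAL = timedelta(hours=3)

class SessionReminder:
    """Tracks when a minor's session last received the 'not human' notice."""
    def __init__(self):
        self.last_reminder = None

    def due(self, now: datetime) -> bool:
        # A reminder is due at session start and every three hours thereafter.
        return self.last_reminder is None or now - self.last_reminder >= REMINDER_INTERVAL

    def mark_sent(self, now: datetime) -> None:
        self.last_reminder = now

def maybe_prepend_reminder(reply: str, session: SessionReminder, now: datetime) -> str:
    """Prepend the AI disclosure to the reply whenever one is due."""
    if session.due(now):
        session.mark_sent(now)
        return "[Reminder: you are chatting with an AI, not a human.]\n" + reply
    return reply
```

Logging each reminder send alongside the timestamp also creates the audit trail that a later gap assessment or regulator inquiry would ask for.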

California ballot measure: Parents & Kids Safe AI Act

After the veto of AB 1064, Common Sense Media advanced a comprehensive kids' AI-safety initiative. Following public pushback and a competing concept from OpenAI, both sides reached a compromise proposal. If it gathers enough signatures, voters will decide this November. Expect age assurance, risk-based duties, and restrictions on selling data tied to minors.

Federal pressure: FTC 6(b) and Congressional interest

The FTC's 6(b) orders seek detail on how major companies build and run companion chatbots. The agency is probing business models, safety claims, and how providers test for harm, especially for children.

  • Monetization strategies, including subscriptions and in-app purchases
  • Third-party data sharing
  • Personalization and companion features in development and deployment
  • Post-deployment testing for negative impacts
  • Safeguards for minors and mitigation steps
  • Data collection, retention, and deletion practices
  • Substantiation for public statements on capabilities, safety, and suitability for minors

On October 21, 2025, Senators Alex Padilla and Adam Schiff urged the FTC to broaden its inquiry. Their letter pressed for review of mental-health crisis detection, response protocols, and the limits of current safeguards.

If you receive a subpoena or CID: move fast and get organized

  • Call counsel immediately. Align on scope, deadlines, and a response plan.
  • Issue a litigation hold. Preserve chat logs, safety evaluations, user reports, marketing claims, and internal comms. Pause routine deletion where needed.
  • Map your product surfaces and data flows. Flag features that sustain multi-session relationships, personalization logic, and any minor-specific pathways.
  • Assemble a cross-functional team: legal, trust & safety, security, product, marketing, and data governance.
  • Validate public claims. Ensure you can substantiate statements about capabilities, safety, and suitability for minors.
  • Document safeguards for minors: break reminders, "not human" disclosures, content filters, and escalation paths for crisis signals.
  • Centralize testing and incident evidence: red-team results, post-deployment harm testing, child-safety experiments, and mitigation changes over time.
  • Review third-party contracts and data sharing. Be ready to explain who gets what data, why, and for how long.
  • Prepare a concise narrative of risk controls, oversight, and continuous improvement. Keep it factual and audit-ready.

Practical next steps for in-house and outside counsel

  • Gap-assess against SB 243. Confirm disclosures, minor reminders every three hours, and content limits are live and logged.
  • Track state bills listed above. Build a baseline policy that can flex for state-by-state differences.
  • Refresh your advertising review program. No safety claims without evidence. Archive substantiation.
  • Stand up a children's data playbook: collection minimization, retention limits, deletion workflows, and review of sales/transfer restrictions.
  • Run a crisis-response tabletop. Define triggers (ideation cues), escalation, human-in-the-loop checks, and referral language.
  • Create an inquiry kit: org chart, data maps, policy index, system diagrams, and contact points for rapid regulator engagement.
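For the crisis-response tabletop, it helps to have a concrete triage stub to exercise. The sketch below is a deliberately minimal stand-in: keyword matching is a placeholder for the trained classifiers a production system would use, the cue list and function names are invented for illustration, and referral language should be clinically reviewed before deployment.

```python
# Illustrative crisis-signal triage for a tabletop exercise. Keyword
# matching stands in for a real classifier; cues and names are hypothetical.

CRISIS_CUES = {"suicide", "kill myself", "end my life", "self-harm"}

REFERRAL = (
    "If you're in crisis, you can reach the 988 Suicide & Crisis Lifeline "
    "by calling or texting 988."
)

def triage(message: str) -> dict:
    """Route a message: escalate to a human reviewer with referral language,
    or let the normal response pipeline proceed."""
    text = message.lower()
    if any(cue in text for cue in CRISIS_CUES):
        return {"escalate": True, "response": REFERRAL}
    return {"escalate": False, "response": None}
```

Running the tabletop against a stub like this forces the team to define the real questions: which signals trigger escalation, who reviews, and how fast.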

What to watch in 2026

  • Outcomes from the FTC's 6(b) inquiry and any follow-on enforcement.
  • Progress and amendments in Florida, Massachusetts, Missouri, New Jersey, Pennsylvania, Washington, and Tennessee.
  • Whether the California compromise initiative qualifies for the ballot and how voters respond.
  • Litigation trends in wrongful-death and product-liability suits tied to chatbot interactions.

Bottom line: chatbot providers should expect more questions, more paperwork, and stricter expectations, especially where minors are involved. Tighten disclosures, harden safety controls, and make sure your claims match your evidence. The best defense is a clear record of how you build, test, monitor, and improve your systems.

If your legal or compliance team needs to upskill on AI products and risk controls, see curated training by job role at Complete AI Training.

