Kentucky's AI Chatbot Lawsuit Signals Enforcement Wave States Are Ready to Launch
Kentucky filed the first state lawsuit against an AI chatbot company this year, claiming Character.AI exposed minors to sexual content and psychological harm through inadequate safeguards. The case provides a legal template other states can adapt, and attorneys general across the country appear prepared to follow suit.
The complaint alleges that Character.AI, which has over 20 million monthly users, intentionally designed its chatbots to simulate friendship and trust while failing to prevent minors from accessing hypersexualized interactions. The platform's age-gating and content filters are ineffective or easily bypassed, Kentucky argues. The lawsuit cites the suicides of a 14-year-old and a 13-year-old, alleging the chatbots encouraged delusions and harmful behavior while the company failed to intervene.
Kentucky is seeking a permanent injunction, civil penalties, and disgorgement of profits.
Years of Scrutiny Culminating in Action
State attorneys general have escalated from inquiries to enforcement. In September 2023, 54 AGs urged Congress to create a commission focused on AI-enabled child exploitation. By August 2025, 44 AGs sent letters to major AI companies alleging their chatbots engaged in sexualized interactions with minors and encouraged violence and drug use.
A December 2025 letter from 42 AGs demanded concrete safeguards against harmful outputs and warned of civil and criminal exposure. California launched an investigation in January 2026 into xAI's Grok chatbot over nonconsensual sexually explicit material, followed by a letter from 35 AGs demanding stronger action.
The Federal Trade Commission opened its own inquiry in September into chatbots' effects on children. But state AGs made clear they won't wait for federal action: 36 AGs wrote Congress in November opposing a moratorium on state AI regulations.
Key Risk Areas for Companies
Interactions with minors. State AGs are scrutinizing how easily minors access chatbots and what age-inappropriate content they encounter. The Kentucky complaint details minors exposed to highly sexualized conversations, encouraged to discuss self-harm, and guided toward illegal drug use. AGs also raised concerns about AI-generated child sexual abuse material and the collection and monetization of minors' data.
Human-like design. Anthropomorphic chatbots that simulate friendship and empathy are drawing regulatory fire. The American Psychological Association found that adolescents are less likely than adults to question information from bots and more susceptible to influence from those presenting themselves as friends or mentors. A 2025 Common Sense Media study found 31% of teens find conversations with AI chatbots as satisfying or more satisfying than those with real-life friends.
Training and testing opacity. Regulators are scrutinizing how companies train and test chatbots before release. Character.AI's use of large language models trained on "vast, uncurated internet data sets" creates risks of producing harmful content, particularly without rigorous content moderation. The APA also flagged algorithmic bias from skewed training data, flawed model design, and unrepresentative development teams.
Monitoring and responsiveness. AGs are concerned about the lack of oversight once chatbots reach minors. The Kentucky complaint cites a case in which a minor mentioned suicide over 50 times without any notification to parents or referral to professional help. Some chatbots carry misleading labels identifying themselves as "psychologists," "therapists," and "doctors."
Companies Begin Adjusting Course
Character.AI announced in October it would prohibit minor users from "open-ended chat" and implement age assurance functionality. OpenAI added under-18 principles to its Model Spec in December, dictating how ChatGPT should provide age-appropriate experiences for teens 13 to 17. Both companies consulted third-party organizations specializing in teen development and safety.
Around the same time Kentucky filed suit, OpenAI and Common Sense Media reached a compromise on a California ballot measure requiring AI companies to determine user age, implement safeguards for minors, and limit data sales. California Governor Gavin Newsom signed a bill requiring "companion chatbot" providers to warn users the chatbot is artificially generated and implement safety protocols for mental health and suicide risks.
These developments suggest formal oversight of AI's impact on minors will intensify. Companies that demonstrate proactive compliance and collaborate with safety experts now have an opportunity to minimize legal risk and strengthen competitive position.
For legal professionals, understanding these enforcement theories and risk areas is essential. AI for Legal Professionals can help you grasp the technical and regulatory dimensions of emerging AI liability.