Liability Challenges in the First AI Chatbot Fatality Lawsuit: A Chinese Legal Perspective

A Florida teen died by suicide after interacting with a Character.AI chatbot, prompting a lawsuit over design flaws and negligence. Chinese law faces challenges in assigning liability for AI-related harm.

Published on: May 07, 2025

Determining Liability in the First AI Chatbot Lawsuit

Artificial intelligence has driven a wave of innovation, but its commercial deployment carries legal risk. On 28 February 2024, 14-year-old Sewell Setzer III of Florida died by suicide after prolonged interaction with a chatbot created by Character.AI. In October 2024, his mother filed a lawsuit alleging defective design, dangerous AI features, inadequate warnings, negligence, and inappropriate dialogue. The case is widely regarded as the first fatality linked to an AI chatbot.

If a similar incident happened in China, how would liability be assessed? This article reviews tort claims related to AI under Chinese law.

Legal Classification of AI Chatbots

The Character.AI app offers over 200 AI personas across various topics. Users can chat with preset characters, create custom ones, or interact with others’ creations. The deceased reportedly interacted with a chatbot modeled on Daenerys Targaryen from Game of Thrones. In China’s regulatory framework, such an app would fall under virtual social networking within generative AI services, governed by the Interim Measures for the Administration of Generative Artificial Intelligence Services.

Product Liability

If an AI chatbot is classified as a product, China’s Product Quality Law imposes strict liability: the producer is liable for harm caused by a defect unless it can prove a statutory exemption, which eases the victim’s path to compensation. If the chatbot is instead deemed a service, the victim must prove the provider’s negligence, a demanding task given AI’s technical complexity.

The Product Quality Law defines products as “processed or manufactured items for sale,” generally understood to mean physical goods. AI chatbots are intangible and lack traditional production and sales processes, so they sit uneasily within this definition. The Interim Measures classify generative AI as a service, suggesting that product liability may not apply directly. These administrative regulations, however, do not bind courts as judicial standards; a court must weigh the chatbot’s technical nature and the evidence before it to decide the classification question.

Arguments supporting classification of an AI chatbot as a product include:

  • Technical features: The Daenerys chatbot is built on algorithmic modeling and trained on user data, giving it the core characteristics of an intelligent robot despite its lack of physical form.
  • Functionality: Unlike typical Q&A systems, this chatbot offers deep emotional engagement and personalized responses, qualifying it as an AI companion application.
  • Applicable laws: China’s statutory definition of a product can be read to cover digital goods that are technologically processed and sold, signaling that the legal concept of a product needs updating to keep pace with technology.
  • Consumer protection: Applying product liability encourages careful AI development, considers vulnerable users, and places evidence burdens on producers, protecting consumers better.

Duty of Care

Because generative AI is officially classified as a service, courts may hesitate to apply product liability. Character.AI’s service also falls outside traditional notice-and-takedown safe harbour rules. Chinese courts therefore increasingly focus on whether an AI platform has met its duty of care, a standard shaped by the difficulty of tracing AI-caused harm and establishing causation.

This duty requires service providers, who have superior risk-management capabilities, to implement reasonable safeguards in design and delivery to reduce risk. The Interim Measures highlight duties such as anti-discrimination, transparency, and user eligibility, and the Provisions on the Ecological Governance of Network Information Content reinforce platform accountability.

Platforms that fail these duties can be found negligent, while those that comply may be exempted, with liability scaling with the severity of the breach. Because Character.AI’s chatbot offers intimate emotional conversation, it calls for heightened safeguards beyond ordinary content moderation, making this framework directly relevant here.

After the incident, Character.AI publicly apologized and introduced stronger protections for minors, improved content moderation, clearer disclaimers, and usage reminders. These remedial steps suggest an acknowledgment that its earlier safeguards fell short of its obligations.
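To make these safeguarding obligations concrete for product teams, the sketch below shows, in Python, what a minimal pre-response guardrail layer for an AI companion app might look like: self-harm screening, age-gated protections for minors, and periodic reminders that the user is talking to an AI. All names, patterns, and thresholds here are illustrative assumptions for this article, not Character.AI’s actual implementation or any statutory requirement.

```python
# Hypothetical guardrail layer for an AI companion chatbot.
# All patterns, thresholds, and messages are illustrative assumptions,
# not Character.AI's implementation or a legal standard.
import re
from dataclasses import dataclass
from typing import Optional

# A production system would use trained classifiers and human review;
# simple keyword patterns stand in for that here.
SELF_HARM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicide\b", r"\bend my life\b")
]

CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You are not alone; please consider contacting a crisis helpline."
)
USAGE_REMINDER = "Reminder: you are talking to an AI character, not a real person."


@dataclass
class Session:
    user_age: int
    turn_count: int = 0


def screen_message(session: Session, message: str) -> Optional[str]:
    """Return an override response if a safeguard triggers,
    or None if the message may be passed to the generative model."""
    session.turn_count += 1
    # Intercept self-harm content before the model can reply in character.
    if any(p.search(message) for p in SELF_HARM_PATTERNS):
        return CRISIS_RESPONSE
    # Heightened protection for minors: remind them more often that the
    # conversation partner is an AI (every 10 turns instead of every 50).
    interval = 10 if session.user_age < 18 else 50
    if session.turn_count % interval == 0:
        return USAGE_REMINDER
    return None


if __name__ == "__main__":
    session = Session(user_age=14)
    print(screen_message(session, "I want to end my life"))  # crisis response
    print(screen_message(session, "Tell me about dragons"))  # None: pass to model
```

In a real service, reminders would more likely accompany the model’s reply rather than replace it, and crisis handling would route users to locale-appropriate resources; the point is simply that duty-of-care measures of this kind can be implemented as explicit, auditable design-time checks.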

Key Takeaways

This case highlights unresolved issues in assigning liability for AI-related harm, with regulatory gaps in both the US and China. The US has seen calls for stronger child protections against AI risks. China faces increasing pressure to expedite its AI regulatory framework development.

For legal and product development professionals, understanding evolving liability standards—whether through product classification or duty of care—is crucial for managing risk and designing safer AI services.

