Chinese Court: Developers Aren't Automatically Liable for AI Hallucinations
A first-of-its-kind decision from the Hangzhou Internet Court holds that developers are not automatically responsible for AI hallucinations. AI outputs in these disputes are treated as a service, not a product. That means plaintiffs must show fault in the generation process and prove actual harm.
The case was dismissed in December after an AI invented a nonexistent campus of a real Chinese university and later said it would pay 100,000 yuan for the mistake. The court found no developer liability and neither party appealed.
What the Court Actually Held
AI systems do not have civil subject status, so they cannot make legally binding promises on their own. The developer had not authorized the AI to express intent on the company's behalf, so there was no binding offer or admission. The court also said that AI-generated content is generally not high risk, that developers have limited control over every output, and that imposing strict liability could hinder innovation.
The Facts That Mattered
In June 2025, a user (Liang) asked the AI about a university. The system fabricated a "campus" and stuck to the claim even after being challenged. It then said it would compensate 100,000 yuan and suggested suing in the Hangzhou Internet Court. Liang sought nearly 10,000 yuan in damages, but the court found no actual harm because the misinformation didn't affect his subsequent decisions.
Why This Matters for Legal Teams
- No strict liability for hallucinations by default. Plaintiffs must show fault, causation, and damage.
- Chatbot "promises" don't bind the company without authorization or later ratification.
- Courts will look at a provider's duty of care, including controls, review processes, and the potential impact of the content on users' rights.
- Classification as a service shifts the analysis toward negligence principles rather than product defect theories.
Practical Steps for Providers and Counsel
- Place clear, prominent non-reliance and limitation notices near outputs; avoid burying them in footers.
- Log prompts, outputs, and interventions to show reasonable care and enable audits (a minimal logging sketch follows this list).
- Tune refusal and correction behavior to avoid fabricating offers of compensation or guarantees.
- Set escalation thresholds: route high-impact topics (health, finance, employment, safety) to enhanced safeguards or human review (a routing sketch also follows this list).
- Offer report-and-takedown and correction workflows; respond quickly to flagged misinformation.
- Train customer support and sales not to affirm or ratify AI-generated statements.
- Document testing, red-teaming, and post-deployment monitoring as evidence of diligence.
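For the logging point above, here is a minimal sketch of how an append-only audit record might be captured around each generation call. The file name, field names, and the `generate_reply` callable are illustrative assumptions, not a reference to any particular product or API.

```python
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # append-only JSON Lines file (illustrative choice)

def log_interaction(prompt: str, output: str, interventions: list[str]) -> str:
    """Append one prompt/output record, plus any safety interventions, for later audit."""
    record = {
        "id": str(uuid.uuid4()),         # stable identifier for cross-referencing complaints
        "timestamp": time.time(),        # when the response was produced
        "prompt": prompt,                # what the user asked
        "output": output,                # what the system returned
        "interventions": interventions,  # e.g. refusals, disclaimers shown, human review flags
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["id"]

# Wrap a (hypothetical) generation function so every exchange leaves an audit trail.
def answer_with_audit(prompt: str, generate_reply) -> str:
    output = generate_reply(prompt)
    log_interaction(prompt, output, interventions=["non_reliance_notice_shown"])
    return output
```

Records like these are the kind of evidence of reasonable care the court's fault-based analysis rewards; how long they are retained and who can read them is a separate data-protection question.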
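For the escalation bullet, a sketch of topic-based routing under simple assumptions. The keyword lists, topic names, and the `needs_human_review` flag are placeholders; a real deployment would likely use a trained classifier rather than keyword matching, but the routing decision should be just as explicit and auditable.

```python
# Hypothetical mapping from high-impact topics to trigger keywords.
HIGH_IMPACT_TOPICS = {
    "health": ["diagnosis", "medication", "dosage"],
    "finance": ["investment", "loan", "tax advice"],
    "employment": ["termination", "visa", "contract"],
    "safety": ["weapon", "self-harm", "recall"],
}

def classify_risk(prompt: str) -> str | None:
    """Return the matched high-impact topic, or None if the prompt looks low risk."""
    text = prompt.lower()
    for topic, keywords in HIGH_IMPACT_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return None

def route(prompt: str) -> dict:
    """Decide whether a prompt gets default handling or enhanced safeguards / human review."""
    topic = classify_risk(prompt)
    if topic is None:
        return {"handler": "default", "needs_human_review": False}
    return {"handler": "enhanced_safeguards", "topic": topic, "needs_human_review": True}
```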
Regulatory Context in China
Current rules require providers to review and remove prohibited or illegal content, but they do not mandate accuracy for every output. The judgment aligns with that approach: focus on duty of care, not perfection.
Reference: China's Generative AI Measures (translation).
Open Questions to Watch
- What qualifies as "actual harm" from reliance on chatbot answers: direct financial loss, reputational damage, wasted time?
- When might AI outputs be deemed high risk, prompting a stricter duty of care?
- Will enterprise deployments (internal copilots, sector-specific systems) face a higher standard than public chatbots?
Bottom Line
Absent proof of fault and actual damage, developers aren't liable for hallucinations. The bar for plaintiffs is higher, but providers still need visible safeguards and documentation to show they acted responsibly.