Oregon passes AI chatbot bill giving users right to sue over safety violations

Oregon's Senate Bill 1546 holds AI chatbot operators directly liable for safety and disclosure failures, letting users sue for $1,000 per violation. The bill awaits Gov. Kotek's signature and takes effect in 2027.

Published on: Mar 24, 2026

Oregon Passes AI Chatbot Law With Direct Liability for Operators

Oregon lawmakers approved Senate Bill 1546, which imposes safety, disclosure and liability requirements on AI chatbot providers. The bill passed both chambers and awaits Gov. Tina Kotek's signature. If signed, it takes effect in 2027.

The law creates a private right of action, allowing users to sue operators for violations and claim $1,000 in statutory damages per violation. This shifts enforcement from regulators to individual users.

What the Law Requires

Operators must clearly disclose that users are interacting with AI and maintain documented safety protocols. Systems must detect signals of suicidal ideation or self-harm, interrupt conversations and direct users to crisis resources.

The law prohibits outputs that could encourage harmful behavior or escalate emotional distress. Companies must redesign chatbots to include real-time monitoring, intervention triggers, audit logs and escalation pathways.

Stricter Rules for Minors

Platforms must issue repeated disclosures to young users, restrict sexually explicit content and avoid features that drive emotional dependency or prolonged engagement. Companies must act when there is a "reason to believe" a user is underage, even without explicit confirmation.

Scope and Ambiguity

The law applies broadly to companies deploying conversational AI across healthcare, financial services, education and customer support. Systems that personalize responses, store user context or simulate emotional engagement fall within its scope.

A significant gap exists in how violations are counted. The law does not define whether violations are measured per interaction, per conversation session or per user. This ambiguity increases cumulative liability risk: a single user session could generate multiple claims.

Each real-time intervention, disclosure failure or system output that causes emotional distress creates potential exposure. Companies face legal uncertainty about what constitutes a single violation in continuous conversations.
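The spread between those counting interpretations can be made concrete with simple arithmetic. The usage figures below are hypothetical, and only the $1,000 statutory damages figure comes from the bill:

```python
STATUTORY_DAMAGES = 1_000  # dollars per violation under SB 1546

# Hypothetical usage figures, for illustration only.
users = 10_000
sessions_per_user = 5
messages_per_session = 20

# Worst-case exposure if every unit of use were a violation,
# under each possible reading of "per violation".
exposure = {
    "per_user": users * STATUTORY_DAMAGES,
    "per_session": users * sessions_per_user * STATUTORY_DAMAGES,
    "per_interaction": users * sessions_per_user
                       * messages_per_session * STATUTORY_DAMAGES,
}

for basis, dollars in exposure.items():
    print(f"{basis}: ${dollars:,}")
# per_user: $10,000,000
# per_session: $50,000,000
# per_interaction: $1,000,000,000
```

Under these assumed numbers, the per-interaction reading yields exposure two orders of magnitude larger than the per-user reading, which is why the counting ambiguity matters so much to operators.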
