AI Puts Section 230 on Trial, and Big Tech on the Hook
Generative AI blurs Section 230's shield: platform-authored outputs look like speech, not hosting. Teen-harm suits suggest courts will weigh design choices, not moderation.

Why Section 230's "26 words" may not shield AI platforms from liability
For years, platforms leaned on Section 230 of the Communications Decency Act to defeat lawsuits tied to harmful third-party content. Generative AI disrupts that playbook. When a system creates the words itself, the platform starts to look less like a neutral host and more like a speaker.
This shift is already colliding with high-risk use cases involving minors. Lawsuits against OpenAI and Character.AI allege that chatbot outputs contributed to self-harm by teens, claims the companies deny. Meta has faced scrutiny after internal guidance suggested its chatbot could engage in "romantic or sensual" chats with teens; the company says those examples were erroneous and have been removed, and that new guardrails are in place.
Section 230's core rule, and where AI tests it
Section 230 was built to protect platforms from liability for what users post, not for what platforms themselves generate. Courts have long treated curation and organization of third-party content as "hosting," often granting immunity. Generative systems flip the script: transformer models produce new, individualized text.
That distinction matters. Extractive functions, such as search snippets or feed ranking, have generally been treated as content neutral. Generative chat looks closer to authored speech. If a product's design predictably yields harmful output, the risk profile changes.
47 U.S.C. § 230 remains the anchor, but nothing in the text squarely addresses platform-authored AI output. Expect arguments to turn on whether the output is attributable to the platform or is merely a neutral transformation of third-party material.
Design choices vs. moderation failures
Courts have often protected failures to remove third-party posts. But they draw tighter lines when the platform materially contributes to illegality, defects, or deception. Building a chatbot that can produce harmful content, especially for minors, looks less like passive hosting and more like an actionable design choice.
That's the heart of current pleadings: not "you didn't take down bad content," but "you built a system that creates it." If that framing sticks, Section 230 may not reach the conduct at issue.
Active litigation signals the strategy
Multiple suits accuse OpenAI and Character.AI of failing to protect minors from dangerous chatbot outputs. Both companies dispute the claims and point to expanded parental controls. Notably, in one case involving Character.AI, the defense reportedly did not invoke Section 230, an early signal that some defendants may see weak odds for a 230 defense in generative contexts.
Meta's situation highlights a parallel risk: product guidance and guardrails. If internal rules allow unsafe interactions with teens, plaintiffs will argue foreseeability. Meta says it is adding restrictions, training systems to deflect sensitive topics with teens, and limiting teen access to certain AI characters.
Legislative pressure is building
Congress has taken notice. In 2023, Sen. Josh Hawley introduced the No Section 230 Immunity for AI Act to exclude generative AI from 230 protections; it was blocked after an objection by Sen. Ted Cruz. Even without a statutory change, litigants are already pressing the "platform-authored content" theory.
Courts have previously treated content-neutral algorithms that organize third-party material as non-publisher activity. Some defendants will argue that AI outputs are just neutral processing of inputs. The counterpoint from practitioners: if the content is generated by the platform's own code, it is the platform's speech.
What legal teams should do now
- Map exposure by function: Separate extractive features (search, ranking) from generative features (chat, auto-reply, content creation). Assume 230 arguments are weaker for the latter.
- Treat chatbots like products: Assess warnings, age gates, safe-use instructions, foreseeable misuse, and fail-safe design. Product liability, negligence, and unfair practices theories are all in play.
- Hard-code protections for minors: Enforce strict age verification, topic blocks for self-harm and sexual content, escalation to expert resources, and default opt-outs for high-risk personas. Log and review edge cases (see the sketch after this list).
- Document "content-neutral" filters: Where you rely on 230, show that systems curate third-party content without materially altering it. Keep design records and red-teaming reports to evidence intent and controls.
- Build incident response for AI output: Triage pathways for harmful prompts, rapid model updates, notice to affected users, and regulator engagement. Align with data retention and audit needs.
- Update contracts and insurance: Revisit vendor and API agreements (indemnities, safety SLAs, logging) and confirm coverage for AI-generated harms. Many policies exclude algorithmic output risk by default.
- Monitor the docket and agencies: Track federal and state cases involving chatbot harms, especially those with minors. Expect state AGs and consumer protection authorities to test theories beyond 230.
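To make the guardrail and logging items above concrete, here is a minimal sketch in Python of a pre-display check that blocks self-harm and sexual-content topics for users flagged as minors, surfaces a crisis-resource message, and writes an append-only audit record. Everything here is illustrative: the function names (evaluate_turn, classify_topics), the log file guardrail_audit.jsonl, and the keyword heuristics are assumptions standing in for a real safety classifier and a real age-verification system, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Hypothetical topic labels; a production system would use a trained
# safety classifier rather than keyword matching.
BLOCKED_TOPICS_FOR_MINORS = {"self_harm", "sexual_content"}
CRISIS_RESOURCE_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "Please reach out to a trusted adult or a crisis line such as 988 (US)."
)

@dataclass
class GuardrailDecision:
    allow: bool
    reason: str
    escalation_message: str | None = None

def classify_topics(text: str) -> set[str]:
    """Placeholder classifier: keyword heuristics stand in for a safety model."""
    lowered = text.lower()
    topics: set[str] = set()
    if any(k in lowered for k in ("hurt myself", "kill myself", "suicide")):
        topics.add("self_harm")
    if any(k in lowered for k in ("sexting", "nude", "explicit")):
        topics.add("sexual_content")
    return topics

def evaluate_turn(user_is_minor: bool, prompt: str, draft_reply: str) -> GuardrailDecision:
    """Check both the user prompt and the model's draft reply before display."""
    topics = classify_topics(prompt) | classify_topics(draft_reply)
    if user_is_minor and topics & BLOCKED_TOPICS_FOR_MINORS:
        decision = GuardrailDecision(
            allow=False,
            reason=f"blocked topics for minor: {sorted(topics)}",
            escalation_message=CRISIS_RESOURCE_MESSAGE if "self_harm" in topics else None,
        )
    else:
        decision = GuardrailDecision(allow=True, reason="no blocked topics")
    # Append-only audit log so edge cases can be reviewed and evidenced later.
    with open("guardrail_audit.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "minor": user_is_minor,
            "topics": sorted(topics),
            "allowed": decision.allow,
            "reason": decision.reason,
        }) + "\n")
    return decision

if __name__ == "__main__":
    d = evaluate_turn(user_is_minor=True, prompt="I want to hurt myself", draft_reply="...")
    print(d.allow, d.reason, d.escalation_message)
```

The design point is the one the checklist makes: checks run on both the prompt and the model's draft output before anything reaches the user, and every decision leaves an append-only record that can later support the documentation and incident-response items above.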
How courts may sort the questions
- Attribution: Is the output attributable to the platform or to a third party? Generative content points to the platform.
- Material contribution: Did design choices materially contribute to unlawful or harmful content?
- Neutrality: Are algorithms organizing existing content, or creating new statements?
- Foreseeability and safeguards: Were harms reasonably foreseeable, and were guardrails adequate, especially for minors?
The takeaway for counsel: do not assume Section 230 will carry generative AI cases. Center your defense on product safety, reasonable design, and documented controls. Build the record now.