Federal court rules AI chatbot conversations are not protected by attorney-client privilege or work product doctrine

A federal judge ruled that employee chats with AI platforms like Claude carry no attorney-client privilege and are fully discoverable in court. The February 2026 decision is the first major ruling on the status of AI conversations in litigation.

Published on: Mar 28, 2026

Your AI Chatbot Conversations Aren't Privileged, and Courts Will Compel Them

A federal judge in New York ruled that employees who use publicly available AI platforms to discuss legal strategy, draft litigation documents, or explore litigation positions cannot rely on attorney-client privilege or work product protection. The conversations are discoverable.

The ruling came in United States v. Heppner, where Judge Jed S. Rakoff examined whether a corporate executive's chats with Claude, an AI platform made by Anthropic, deserved legal protection. They did not. The decision signals that any business using commercial AI tools to handle sensitive legal matters faces real litigation risk.

What Happened in the Case

Bradley Heppner, a corporate executive charged with securities fraud, used Claude to organize his thinking about potential defense strategies after receiving a grand jury subpoena. He later shared these conversations with his lawyers, who argued they should be protected from disclosure.

The government moved to compel production. On February 10, 2026, Judge Rakoff granted the motion and explained why the documents deserved no protection.

Why Attorney-Client Privilege Failed

Attorney-client privilege protects confidential communications between a client and attorney made to obtain legal advice. The court found Heppner's AI conversations failed on multiple grounds.

Claude is not an attorney. No attorney-client relationship existed between Heppner and the AI platform. The privilege cannot apply without that foundational relationship.

The communications were not confidential. Anthropic's privacy policy, which users accept when signing up, states that the company collects user inputs and outputs, uses that data for training, and reserves the right to disclose it to third parties, including government authorities. Heppner had no reasonable expectation of confidentiality.

Heppner did not seek legal advice from Claude. He used the platform on his own initiative, not at his lawyer's direction. Claude itself disclaims providing legal advice. Sharing the conversations with his attorney afterward did not transform them into privileged documents.

Why Work Product Doctrine Failed

The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Heppner's documents failed for two reasons.

First, his counsel did not direct him to use Claude; he acted on his own. The Second Circuit consistently requires that work product be prepared at counsel's behest, not independently by the client.

Second, the documents did not reflect his counsel's strategy or mental processes at the time they were created. While they may have influenced strategy later, they captured Heppner's thinking, not his lawyer's.

Five Practical Steps for Your Organization

1. Treat all AI conversations as discoverable. If employees use ChatGPT, Gemini, Claude, or similar platforms to discuss legal problems or litigation strategy, assume those conversations can be obtained by opposing parties in civil cases or by government agencies in criminal proceedings. Do not input sensitive information you would not share with a third party.

2. Have counsel direct AI use. Judge Rakoff noted that the outcome might differ if a lawyer explicitly directed an employee to use an AI tool as part of the attorney's analytical process. When AI tools are used under counsel's supervision and direction, there is a stronger argument for protection. Document these directives.

3. Check the privacy policy before using any platform. The court's analysis hinged on Anthropic's privacy terms. Enterprise or API versions of many AI platforms offer different data handling than consumer products. Review the terms before using any tool with sensitive business or legal information. Choose platforms with appropriate data protection commitments.

4. Create a clear AI usage policy now. Heppner is the first major court decision on this issue. More will follow. A policy should include:

  • Employee education that AI conversations may not be private, may not be privileged, and may be subject to discovery
  • Approval lists for which AI tools employees can use with litigation-sensitive matters
  • Clear distinctions between enterprise tools with appropriate data handling and consumer products with broad disclosure rights
  • A requirement that any AI-assisted legal work be conducted at counsel's direction and documented as such
  • Litigation hold procedures triggered at the earliest sign of anticipated litigation, expressly covering AI documents and conversation logs

5. Treat AI governance as litigation readiness, not compliance theater. Organizations that merely check the box on AI policy without addressing the underlying discovery risks expose themselves to significant legal cost.

For PR and communications teams, this ruling carries direct implications. If your organization uses AI tools to draft strategy, analyze positions, or work through communications scenarios related to legal matters, those conversations are not protected. Brief your teams accordingly.


