SDNY's Heppner ruling: AI outputs aren't privileged or work product
On February 10, 2026, Judge Jed Rakoff (S.D.N.Y.) ruled from the bench that documents a criminal defendant generated with an AI tool and later shared with defense counsel were neither protected by attorney-client privilege nor the work-product doctrine. A written memorandum followed on February 17. The decision "appears to answer a question of first impression nationwide."
In United States v. Heppner, the defendant used Claude, Anthropic's AI chatbot, to research issues tied to the government's investigation before his arrest. He later gave roughly 31 AI-generated documents to his attorneys. The government moved to compel; Judge Rakoff granted the motion, stating: "I'm not seeing remotely any basis for any claim of attorney-client privilege."
Attorney-client privilege: the baseline
The privilege protects confidential communications between attorney and client made for the purpose of obtaining or providing legal advice. Whether that extends to AI-generated materials had been an open question. Heppner draws a clear boundary.
Why privilege did not attach
- No attorney involved: Communications with an AI tool are not communications with counsel. As the court put it, no attorney-client relationship can exist between an AI user and a platform like Claude. Using a chatbot to research isn't consulting a lawyer.
- No confidentiality: Anthropic's privacy policy permits disclosure of user data to "governmental regulatory authorities" and "third parties." Disclosure risks like these undercut any reasonable expectation of confidentiality and waive privilege.
- No request for legal advice (from the AI): The tool disclaims providing legal advice. Even if the defendant planned to show outputs to his attorneys later, the relevant question is whether he sought legal advice from the AI. He didn't, and couldn't.
For background on privilege principles, see Cornell's overview of attorney-client privilege.
Work-product doctrine: court narrows the lane
The defendant argued the documents were prepared "in anticipation of litigation," and the doctrine protects materials prepared "by or for" a party. On that reading, client-created materials aiding a defense should qualify.
Judge Rakoff disagreed. The doctrine safeguards a "zone of privacy" for counsel to develop legal theories and strategy, with particular protection for attorneys' mental impressions. Second Circuit precedent emphasizes the lawyer's thought process. The court expressly declined to follow Shih v. Petal Card, Inc. (S.D.N.Y. 2023) to the extent it suggested broader protection for party-prepared materials without attorney direction.
Because Heppner "acted on his own" and the documents did not reflect or reveal counsel's strategy, work-product protection did not apply. For the core standard, see Fed. R. Civ. P. 26(b)(3) on work product.
What this means for legal teams
This ruling reaches beyond one case. Terms of service and privacy policies that allow disclosure to third parties, or to government authorities, can defeat confidentiality. That logic likely applies across consumer-facing AI tools with similar terms.
Enterprise or contractual setups with stronger confidentiality commitments may fare differently, but the analysis will turn on the specifics: who controls the data, who can access it, and whether counsel directs the work with an eye toward litigation. Expect uneven outcomes until more courts weigh in, especially on work product.
Practical steps to preserve privilege and work product
- Consult counsel before using AI for litigation-related tasks: If investigations or litigation are on the horizon, loop in your lawyers first so any AI-supported work is scoped, directed, and documented by counsel.
- Review platform terms of service and privacy policies: If a provider can disclose data to third parties or authorities, assume confidentiality is compromised.
- Treat consumer AI interactions as discoverable: Don't input sensitive facts, strategy, or client confidences into public tools absent a plan approved by counsel.
- Update client onboarding and engagement letters: Address AI use explicitly. Set expectations on tools, data handling, and privilege risks.
- Document purpose and timing: If AI is used, record that materials were created "because of" anticipated litigation and at counsel's direction. Keep that record contemporaneous.
- Evaluate enterprise AI solutions: Prefer deployments with contractual confidentiality, audit logs, access controls, and data residency limits that support privilege arguments.
The road ahead
Generative AI won't rewrite core privilege rules. Heppner is the first judicial marker: AI outputs used independently by a client, on consumer terms, are unlikely to be privileged or protected work product. Other courts may diverge, especially on the breadth of party-prepared work product, but counsel-directed, confidential workflows will remain your strongest footing.
Firms and in-house teams should align on policy now: approved tools, engagement language, and a playbook for high-stakes matters. Doing this upfront reduces waiver risk and avoids messy discovery fights later.