Court Rejects Privilege Claim Over AI-Generated Documents
A federal court has drawn a bright line around privilege and public AI tools. In USA v. Heppner, Judge Jed S. Rakoff (S.D.N.Y.) ordered the production of 31 documents the defendant generated with Anthropic's Claude, even though the defendant later shared them with counsel. The ruling, issued from the bench on February 17, 2026, offers early guidance on how privilege applies to AI outputs, and where it breaks down.
Background
Bradley Heppner, former CEO of Beneficient, faces charges tied to alleged investor fraud and the misappropriation of more than $300 million from GWG Holdings. After his arrest, authorities seized devices containing prompts and outputs from Claude.
Defense counsel told prosecutors the AI documents included information learned from counsel and were created to help obtain legal advice. Counsel also conceded they neither directed the AI use nor participated in creating the documents.
Attorney-Client Privilege: Why It Failed
- No attorney involvement: Communications with an AI tool are not communications with a lawyer. The provider's terms of service disclaimed any attorney-client relationship and stated the tool does not provide legal advice.
- Not generated to obtain legal advice from a lawyer: The documents were made by the defendant and only later shared with counsel. An AI tool cannot give legal advice, and packaging one's thoughts for later discussion does not transform AI prompts/outputs into privileged communications.
- Lack of confidentiality: The AI tool's privacy policy allowed disclosure of user inputs and outputs to third parties, including government authorities. Any information derived from counsel was therefore disclosed to a third party, waiving privilege.
Judge Rakoff concluded there was no basis for attorney-client privilege over the AI materials, and sending them to counsel after the fact did not convert them into privileged communications.
Work Product: Also Out
The court rejected work product protection as well. Relying on In re Grand Jury Subpoenas (2d Cir. 2003), the government argued work product applies only to materials prepared by or at the direction of counsel in anticipation of litigation. Here, Heppner created the materials independently, without counsel's involvement.
The court held that independent research, even if later shared with a lawyer, does not qualify as protected work product. There is authority recognizing protection for materials prepared by or for a party under Federal Rule of Civil Procedure 26(b)(3)(A), and cases such as Wultz v. Bank of China and U.S. v. Stewart have afforded protection in certain contexts. But this ruling rested on two anchors: the lack of counsel direction and the lack of confidentiality.
What This Means for Lawyers and Clients
- Preserve confidentiality: Avoid public models for sensitive matters. Use closed/enterprise deployments with contractual guarantees that inputs/outputs are not disclosed or used for training, and verify vendor privacy terms.
- Ensure counsel direction and involvement: If AI is used to assist legal advice, it should be at counsel's express direction. Where appropriate, prompts and project notes should reflect that direction and purpose.
- Limit dissemination: Control access to prompts and outputs. Set retention rules, restrict sharing, and keep audit trails.
- Draft privilege logs carefully: Explain the role of counsel, the litigation purpose, and the confidentiality safeguards. Be precise about who did what, when, and why.
- Proceed with caution: Do not paste privileged or sensitive facts into public tools. If clients insist on AI assistance, route them to sanctioned, private systems and provide written guardrails.
Practical Checklist for AI Use in Matters
- Confirm the model is private (no training on user data; no third-party disclosures).
- Document counsel's direction and the litigation purpose before any AI work begins.
- Keep AI outputs confined to the legal team; apply need-to-know access.
- Label working drafts appropriately (e.g., "Prepared at direction of counsel").
- Align vendor contracts with privilege and confidentiality requirements.
Looking Ahead
Expect more privilege fights over AI-generated materials. Outcomes will turn on jurisdiction, facts, the tool's privacy posture, and, critically, counsel's involvement. Public tools are a hard sell; enterprise platforms that maintain confidentiality and avoid training on user inputs may fare differently.