SDNY: AI-Created Documents Shared with Counsel Aren't Privileged
A federal judge in New York said documents a Texas financial services executive generated with an AI tool and then sent to his attorney are not protected by attorney-client privilege or the work-product doctrine. The comments came ahead of an April 6 trial for former Beneficient CEO Bradley Heppner, who faces charges tied to GWG Holdings, including fraud and lying to auditors, according to Law360.
In 2025, Heppner used an AI system to prepare 31 documents related to his legal case and shared them with defense counsel at Quinn Emanuel Urquhart & Sullivan. Prosecutors asked the court to hold that neither privilege nor work-product protection covers those materials. U.S. District Judge Jed S. Rakoff said he saw no basis for privilege, noting that the AI provider's terms specify that inputs are not confidential and that users should have no expectation of privacy.
The takeaway is blunt: if a client or lawyer feeds case facts into an AI tool that treats inputs as non-confidential, that disclosure can defeat privilege and undercut work-product protection. Contract terms and data handling practices are doing the heavy lifting here.
Why this matters for legal teams
- Using consumer or "public" AI tools with terms that allow human review, logging, or model training can amount to disclosure to a third party.
- Privilege depends on confidentiality. If the vendor reserves rights over your inputs, expect a challenge.
- Work-product arguments get weaker when a third party can access prompts, drafts, or usage data.
Immediate steps to reduce privilege risk
- Use enterprise AI under written agreements that bind the vendor to confidentiality, disable model training, and limit retention. Get data-processing and subprocessor terms in writing.
- Prohibit staff and clients from entering identifiable facts into personal or free chatbot accounts. Route sensitive work through firm-controlled systems.
- Scrutinize terms for "non-confidential," "training," "human review," "telemetry," and "sharing." If you see them, treat the tool as non-privileged. (A quick keyword scan is sketched after this list.)
- Sanitize prompts. Use hypotheticals or anonymized facts when privilege is a concern. (A simple redaction helper is sketched after this list.)
- Update engagement letters and outside counsel guidelines to disclose and govern AI use, including privilege expectations.
- Address AI in your ESI protocol: include prompts, outputs, logs, and vendor data; seek a Rule 502(d) order for inadvertent disclosures.
- Maintain an AI usage register for matters: tools used, settings, data retention, and who had access.
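To make the terms-review step concrete, here is a minimal sketch of a keyword scan you could run against a vendor's terms saved as plain text. The file name and red-flag list are illustrative assumptions, not a complete vocabulary, and a hit is only a signal that a lawyer should read that clause.

```python
import re
from pathlib import Path

# Illustrative red-flag phrases drawn from the checklist above; extend for your own review.
RED_FLAGS = [
    r"non-?confidential",
    r"train(?:ing)?\s+(?:our\s+)?models?",
    r"human review",
    r"telemetry",
    r"shar(?:e|ed|ing)\s+with",
    r"retention|retain",
]

def scan_terms(path: str) -> list[tuple[int, str]]:
    """Return (line number, text) for each line of the terms that contains a red-flag phrase."""
    hits = []
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if any(re.search(pattern, line, flags=re.IGNORECASE) for pattern in RED_FLAGS):
            hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # "vendor_terms.txt" is a hypothetical file holding the signed terms as plain text.
    for lineno, text in scan_terms("vendor_terms.txt"):
        print(f"line {lineno}: {text}")
```

Run it against the agreement you actually signed, not the marketing page, and keep the flagged clauses with your AI usage register.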
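The prompt-sanitization step can be partially automated as well. Below is a rough, hypothetical redaction helper that swaps obvious identifiers for placeholders before text leaves firm systems; the patterns and party names are assumptions for illustration, and no regex list substitutes for judgment about what belongs in a prompt at all.

```python
import re

# Illustrative patterns and placeholder party names only; a real matter needs a
# reviewed, matter-specific redaction list.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US SSN format
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[ACCOUNT]"),         # card/account-style numbers
    (re.compile(r"\b(?:Acme Capital|Globex Fund)\b"), "[PARTY]"),   # hypothetical party names
]

def sanitize(prompt: str) -> str:
    """Swap obvious identifiers for placeholders before the text leaves firm systems."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(sanitize("Email jane.doe@acmecapital.com about the 4111 1111 1111 1111 account."))
# -> Email [EMAIL] about the [ACCOUNT] account.
```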
What remains unsettled
- Whether privilege can extend to AI systems when the provider acts as the lawyer's agent under strict confidentiality, especially on-prem or private instances.
- The scope of work-product protection for AI-assisted analysis created at a lawyer's direction inside a closed, non-retentive environment.
- How courts will treat ephemeral processing versus stored prompts, error logs, and telemetry.
Action checklist before your next AI prompt
- Confirm: Is this a firm-controlled, enterprise instance with no training and strict retention? If not, don't input client facts.
- Get the vendor's confidentiality and retention terms in the contract, not just the marketing page.
- Lock down SSO, access controls, and audit logs. Disable data sharing and human review features.
- Keep AI drafts and prompts segregated. Assume discoverability unless you can prove confidentiality.
- Train your team and clients. One stray input can waive a privilege argument for the entire thread.
Key facts from the Heppner matter
- 31 AI-prepared documents were shared with defense counsel.
- The court indicated no privilege or work-product protection applies, in part because the AI tool's terms disclaim confidentiality.
- Trial set for April 6; charges include fraud and lying to auditors linked to GWG Holdings and Beneficient.
Related resources
- Attorney-Client Privilege (LII)
- FRE 502: Attorney-Client Privilege and Work Product; Limitations on Waiver (LII)
- AI for Legal