Federal Court Rules AI Chatbot Exchanges Aren't Privileged
A federal judge in New York has rejected a defendant's claim that his conversations with a publicly available AI platform qualify for attorney-client privilege or work product protection. The ruling in United States v. Heppner establishes that traditional legal privilege rules apply directly to generative AI tools, and that using commercial chatbots for sensitive legal work carries serious disclosure risks.
The defendant, a corporate executive facing securities fraud and wire fraud charges, used a commercial AI platform to draft defense strategy documents after his arrest. He created approximately 31 documents outlining potential legal approaches, then shared them with his attorneys. The government seized the materials while executing a search warrant and moved for a ruling that they were not protected by any privilege.
The court granted the motion on three grounds.
The Privilege Analysis
Courts recognize attorney-client privilege only when three conditions are met: the communication involves a client and an attorney, it remains confidential, and the client is seeking legal advice. The AI documents failed all three tests.
No attorney-client relationship exists with an AI system. The court rejected comparisons to neutral tools like cloud storage, noting that recognized privileges depend on a trusting relationship with a licensed professional. An AI platform cannot fill that role.
The communications were not confidential. The platform's privacy policy explicitly stated it collects user inputs and outputs, uses that data to train its model, and reserves the right to share data with third parties, including government agencies. The defendant had no reasonable expectation of privacy. By entering previously privileged information into the platform, he waived any privilege that information once held.
This aspect of the ruling hinges on the specific platform involved. Subscription-based tools designed for legal work, such as Lexis+ AI, Westlaw Precision/CoCounsel, and Harvey, operate under contractual terms that prohibit using client data for model training and restrict third-party disclosure. The court's confidentiality analysis would likely differ for platforms with those protections.
The defendant did not seek legal advice from the AI. His own defense counsel conceded they had not directed or suggested his use of the platform. The AI tool's disclaimers, which stated it is not a lawyer and cannot provide formal legal advice, reinforced this conclusion. Communications do not become privileged simply because they are later shown to an attorney.
Work Product Doctrine Failed Too
Even if the defendant had created the documents in anticipation of litigation, the court held they were not protected work product. The defendant prepared them on his own initiative, not "by or at the behest of counsel." More critically, they did not reflect counsel's existing strategy or mental impressions at the time of creation. Work product protection requires that materials be prepared at counsel's direction and reveal counsel's thinking.
What This Means for Organizations
The decision clarifies several principles for anyone handling sensitive matters:
- Privilege requires a human attorney-client relationship. No AI tool can substitute for that relationship.
- Entering privileged information into a publicly available AI platform may waive existing privilege over that information.
- Client-created AI outputs rarely qualify as work product unless prepared under counsel's direction and genuinely reflecting counsel's mental processes.
For communications and PR professionals advising on litigation or investigations, the practical steps are clear:
- Educate employees and clients not to input legal advice, litigation strategy, or confidential assessments into public AI platforms. If AI tools are necessary, use subscription-based platforms designed for legal work that contractually guarantee data confidentiality and prohibit model training on client data.
- Before using any AI tool for sensitive matters, review the platform's privacy policy and data-handling practices. Confirm whether the platform contractually commits to maintaining confidentiality and refrains from using data for model training or third-party disclosure.
- If AI is used to assist legal tasks, structure that use under counsel's direction and supervision so resulting materials can be tied to counsel's mental processes.
- Implement corporate policies governing employee use of generative AI in connection with investigations, regulatory interactions, or litigation.
- When responding to investigations or discovery, identify any AI-generated content in seized or collected data sets and plan review protocols accordingly. Privilege claims over such materials will face substantial obstacles.
The ruling does not prohibit AI use in legal contexts. It simply establishes that the choice of tool matters enormously. A platform's contractual commitment to confidentiality and data protection is now a threshold question for any organization handling sensitive legal work.
For more on how AI intersects with legal practice, see our resources on AI for Legal and the AI Learning Path for Paralegals.