Court rules defendant waives attorney-client privilege by using public AI tools without lawyer's direction

A federal judge ruled that a securities fraud defendant lost attorney-client privilege after using ChatGPT to research his defense without his lawyer's direction. The court held that 31 AI-generated reports were discoverable by prosecutors.

Categorized in: AI News, Legal
Published on: Mar 31, 2026

Federal Court Rules Using Public AI Tools Can Waive Attorney-Client Privilege

A federal judge in New York has ruled that a criminal defendant lost attorney-client privilege protections when he used ChatGPT and similar public AI platforms to discuss his legal defense without his lawyer's direction. The decision in U.S. v. Heppner, issued in February 2026, marks the first major court ruling on whether such AI use destroys confidentiality protections that typically shield attorney-client communications from discovery.

The defendant, facing securities fraud charges, entered multiple prompts into a public generative AI platform seeking to understand potential government prosecution strategies and defense options. He did so after receiving a grand jury subpoena but before his indictment, and without any direction from his attorney.

When federal agents later searched his offices, they recovered 31 AI-generated reports. The defendant claimed privilege protections should prevent prosecutors from using these documents. The court disagreed.

Why the Court Rejected the Privilege Claim

The judge identified three problems. First, the reports were not communications with an attorney; they were exchanges with an AI system. Second, they were not confidential: the platform's terms of service explicitly stated that user inputs and outputs could be shared with third parties, including government authorities. Third, the defendant's attorney never directed him to use the AI tool, so the reports were not created for the purpose of seeking legal advice.

The defendant argued he had used information learned from his attorney to form his prompts and that he later shared the reports with counsel. The court found these facts did not change the outcome. Attorney-client privilege requires that communications remain confidential from the start; sharing them with counsel after the fact does not retroactively create protection.

What Happens Inside Public AI Platforms

When a user enters information into ChatGPT, Claude, Google Gemini, or similar public platforms, that data travels to remote servers operated by the AI company. Depending on the platform's policies, the information may be stored indefinitely, reviewed by human employees, used to train future AI models, or disclosed to third parties upon request.

This architecture creates the core risk: a client or attorney who inputs confidential legal strategy or case details into a public AI tool has no reasonable expectation that the information will remain private.

The Work Product Problem

The court also rejected claims under the attorney work product doctrine, which protects materials prepared by or for an attorney in anticipation of litigation. Work product protection covers an attorney's mental impressions, conclusions, and legal theories, but only when an attorney prepares or directs the preparation of the materials.

Because the defendant acted alone without his attorney's involvement, the AI-generated reports did not reflect counsel's mental impressions. They therefore fell outside work product protection.

What Attorneys Need to Do Now

Lawyers should explicitly address AI use in client engagement letters and communications. This means disclosing the risks of sharing attorney communications or defense strategy with public AI platforms.

Attorneys should also consider whether use of public AI tools violates professional responsibility rules. Rule 1.1 requires technological competence. Rule 1.6 prohibits unauthorized disclosure of client information. Using public AI platforms without appropriate safeguards may breach both.

Before deploying any AI tool, attorneys should ask whether it is necessary, whether it risks exposing client secrets, and whether they have obtained informed consent from the client. In many cases, the safer approach is to use closed, legally trained AI platforms that protect confidentiality rather than public systems.

Clients should not use public AI tools to discuss legal matters, strategy, or defense approaches without explicit direction and approval from their attorney. Even then, the risks may outweigh the benefits.


