Courts rule client AI use falls outside attorney-client privilege in dueling federal decisions

A federal judge ruled that documents created with public AI tools like Claude carry no attorney-client privilege. Courts cite data-sharing policies as grounds to strip confidentiality protections.

Categorized in: AI News, Legal
Published on: Apr 09, 2026

Federal Courts Reject Privilege Claims for Client Use of Public AI Tools

A federal judge in New York has ruled that criminal defendants cannot claim attorney-client privilege for documents created using publicly available artificial intelligence platforms, even when those documents are later shared with their lawyers. The decision signals that courts will scrutinize how clients use AI in legal matters and may strip away confidentiality protections based on the tool selected and how it was deployed.

In United States v. Heppner, decided in February 2026, Judge Rakoff rejected privilege claims for materials a defendant generated using Claude, an AI chatbot made by Anthropic. The defendant had created the documents independently, without instruction from counsel, and only shared them afterward.

The court's reasoning was direct: attorney-client privilege protects communications with attorneys or their agents, and an AI tool is neither. Nor was the defendant seeking legal advice from Claude; Claude's terms of service explicitly disclaim providing legal counsel and instruct users to consult qualified lawyers.

More significantly, Judge Rakoff found the defendant had no reasonable expectation of confidentiality. Claude's privacy policy states that user inputs and outputs are used to train the system and may be disclosed to third parties. That disclosure risk eliminated any basis for privilege protection.

The ruling applies not only to AI-generated outputs but also to information fed into the system. Sharing summaries of legal advice, factual narratives prepared for litigation, or draft legal theories with a public AI platform can waive privilege over both the input and any conclusions the tool generates.

A Narrower Ruling Elsewhere

Courts are not unanimous on the issue. The District Court for the District of Colorado reached a different conclusion in Morgan v. V2X, Inc., decided in 2026. That court allowed a self-represented litigant to claim work-product protection for AI-assisted litigation materials, finding that using an AI platform did not automatically waive protection.

The Colorado court distinguished its case from Heppner on a critical point: the litigant was proceeding without counsel, so there was no gap between party and advocate. Still, the court required changes to the protective order to bar parties from submitting confidential information to public AI platforms unless the vendor contractually agreed not to use or disclose the data and could delete it on request.

Both decisions align on one point: public AI tools pose confidentiality risks, and courts will not treat AI use as categorically protected.

What Organizations Should Do

The decisions establish that privilege protection turns on specific facts: the doctrine invoked, the user's role, and the safeguards around the platform. Organizations and their counsel should act on several fronts.

  • Establish clear policies governing AI use in legal matters and train staff on the risks of waiving confidentiality protections.
  • Require employees to consult legal counsel before using any AI tool for legal work.
  • If using AI platforms, review their terms of service and privacy policies for data retention and third-party disclosure practices.
  • Consider implementing contractual protections with AI vendors to limit data use and require deletion on request.
  • Distinguish between public AI platforms and proprietary tools designed specifically for legal work with stronger confidentiality safeguards.

In-house counsel should train internal clients to avoid unsupervised AI use. Outside counsel should remind clients that sharing potential legal materials with third parties, including AI tools, can destroy privilege.

The courts have made clear they are watching how AI is used in legal contexts. Careless deployment can expose sensitive information and undermine protections that took decades to establish.

For more guidance, see our resources on AI for Legal and Generative AI and LLM applications.
