Chatting With AI Can Waive Attorney-Client Privilege

Ask a chatbot about a case, and you might hand over your strategy. A New York ruling shows prompts can lose privilege: use firm-controlled, private tools, or keep sensitive facts out.

Categorized in: AI News, Legal
Published on: Feb 14, 2026

Better Call Claude? The Confidentiality Risks Of Getting Legal Advice From AI

AI tools are fast, persuasive, and getting better by the week. Many people now ask chatbots for help on legal questions before they call a lawyer, or afterward, to sanity-check the advice.

That speed comes with a cost. If you feed facts about a client or case into a consumer chatbot, you may be handing adversaries a roadmap to your strategy. Recent developments out of New York make that risk concrete.

Privilege 101: What's Protected and What Isn't

The attorney-client privilege protects confidential communications between a client and counsel made for the purpose of obtaining legal advice. If a client speaks candidly to counsel, those conversations are typically off-limits to adversaries. See a plain-English primer here: Attorney-Client Privilege (LII).

The work product doctrine shields materials prepared by or for a party in anticipation of litigation. Client-created notes or drafts meant to assist counsel can qualify, provided they are kept confidential.

Key point: these protections hinge on confidentiality. Bring in a third party, and you risk waiver.

Where AI Breaks Things

Consumer chatbots are not your law firm. Their standard terms often say prompts may be logged, used to improve models, and in some cases shared with service providers or regulators. If that's true, you've disclosed to a third party. Privilege can evaporate.

Even if you meant to summarize thoughts for counsel, using a tool with no real expectation of privacy can undermine both attorney-client and work product protections.

A Cautionary Case: U.S. v. Heppner

After a federal investigation began, a defendant used an AI chatbot to draft materials about his defense and shared them with his lawyers. Agents later seized devices containing those AI-assisted drafts. Prosecutors argued the files weren't privileged.

The court agreed. Based on the vendor's then-current terms of service (which allowed collecting user prompts, training on them, and possible disclosure to third parties), the judge found no reasonable expectation of privacy. Result: the government could use the prompts and outputs at trial, and effectively glimpse defense strategy.

Takeaway: if the tool's default settings defeat confidentiality, privilege arguments will be an uphill climb.

Practical Moves For Legal Teams

  • Assume prompts are discoverable with consumer chatbots. Don't put client identities, case facts, or strategy into public tools.
  • Read the terms and toggle privacy controls. If the vendor can use prompts for training or share them, treat the tool as a third party. Disable training and logging where possible.
  • Use legal-grade or enterprise AI under contracts that prohibit data use for training, define confidentiality, and provide audit/security assurances (SOC 2, ISO 27001). Prefer on-prem, VPC, or zero-retention modes.
  • Channel AI use through counsel. If AI is needed for research or drafting, have the law firm run it under firm controls, not the client on a personal account.
  • Set a written AI policy covering permitted tools, data classification, redaction rules, retention, and approval workflows.
  • Redact and abstract when exploratory use is unavoidable. Strip names, dates, and unique fact patterns that could identify the matter.
  • Label and segregate work product. Store AI-related drafts in privileged folders with limited access and clear privilege markings.
  • Coordinate eDiscovery early. Know what the AI vendor logs, where it's stored, and how to preserve or exclude it.
  • Train your team and clients on privilege, privacy settings, and safe prompt practices. Revisit as tools and policies change.
  • Mind professional duties on confidentiality and tech competence. See ABA Model Rule 1.6.
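The "redact and abstract" step above can be sketched as a simple pre-processing pass run before any prompt leaves the firm's environment. Here is a minimal, hypothetical illustration in Python; the patterns and placeholder labels are invented for this example, and real redaction should rely on vetted, firm-approved tooling rather than ad hoc regexes:

```python
import re

# Illustrative patterns only: honorific-prefixed names, US-style dates,
# federal docket numbers, and email addresses. Real matters will need
# broader, reviewed pattern sets.
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr\.|Ms\.|Dr\.)\s+[A-Z][a-z]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[CASE_NO]": re.compile(r"\b\d{2}-cv-\d{4,5}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace identifying tokens with neutral placeholders."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Ms. Rivera was deposed on 3/14/2025 in case 24-cv-01234; "
           "reach her at a.rivera@example.com.")
    print(redact(raw))
```

The point of the sketch is the workflow, not the patterns: identifiers are abstracted before the prompt is sent, so even an exploratory query to an outside tool carries no names, dates, or unique fact patterns that could tie it to the matter.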

Bottom Line

AI can speed research and drafting, but privilege isn't AI-proof. Treat public chatbots like any other third party. If confidentiality matters, and it does, move to enterprise or legal-specific tools under the firm's control, lock down data use, and keep sensitive prompts out of consumer apps.

If your team needs structured training on safe, practical AI use, consider this resource: AI Certification for Claude.

