Lawyers Tell Clients: Don't Use AI Chatbots for Legal Advice
US attorneys are warning clients that conversations with ChatGPT, Claude, and other AI chatbots can be demanded by prosecutors or opposing counsel in court. A federal judge's ruling this year made the threat concrete.
In February, US District Judge Jed Rakoff ordered Bradley Heppner, former chair of bankrupt financial services company GWG Holdings, to hand over 31 documents he created using Anthropic's Claude. Heppner had used the chatbot to prepare reports about his securities fraud case to share with his attorneys.
Prosecutors argued they had the right to the documents because Heppner's defense lawyers were not directly involved in the AI conversations. Rakoff agreed, writing that no attorney-client relationship existed, "or could exist, between an AI user and a platform such as Claude."
The ruling exposed a gap in legal protection. Attorney-client privilege - which shields communications between lawyers and clients - does not extend to third parties. AI chatbots are third parties.
Law Firms Issue New Guidance
More than a dozen major US law firms have sent emails and posted advisories warning clients to be cautious with AI tools. Some have added language to client contracts stating that sharing a lawyer's advice with a chatbot could waive attorney-client privilege.
"We are telling our clients: You should proceed with caution here," said Alexandria GutiΓ©rrez Swette, a lawyer at New York-based firm Kobre & Kim.
Debevoise & Plimpton suggested clients include specific language in chatbot prompts if they use AI at a lawyer's direction. The firm recommended writing: "I am doing this research at the direction of counsel for X litigation."
O'Melveny & Myers and other firms noted that "closed" AI systems designed for corporate use might provide stronger protections than public chatbots, though they acknowledged this remains largely untested.
Conflicting Rulings Muddy the Picture
The same day Rakoff issued his ruling, a Michigan magistrate judge reached a different conclusion. Magistrate Judge Anthony Patti ruled that a woman representing herself in an employment lawsuit did not have to hand over her ChatGPT conversations about her case.
Patti treated the AI chats as the woman's personal "work-product" - material she prepared for litigation - rather than communications with a third party. "ChatGPT and other generative AI programs are tools, not persons," Patti wrote.
The two rulings show courts are still working out how AI fits into longstanding legal protections. More decisions will likely follow as AI use in legal work grows.
Privacy Terms Offer No Safety Net
Both OpenAI and Anthropic state in their terms of service that they can share user data with third parties. Both also direct users to consult a qualified professional rather than rely on their chatbots for legal advice.
Rakoff noted at a February hearing that Claude's terms "expressly provided that users have no expectation of privacy in their inputs." This language is standard across public AI platforms.
The Practical Reality
Until courts establish clearer rules, the advice from attorneys remains consistent: treat AI chatbots as you would treat a stranger. Don't discuss your case with anyone except your lawyer.
Justin Ellis of New York-based law firm MoloLamken said more rulings will eventually clarify when AI chats can be used as evidence. Until then, the safest approach is the oldest one.
For legal professionals using AI in their work, understanding these boundaries is essential. The technology can assist with research and analysis, but the legal protections that apply to your work with a lawyer do not automatically extend to AI tools.