Two Courts Reach Opposite Conclusions on AI-Generated Deal Documents
Courts are now deciding whether generative AI communications used during M&A transactions qualify for legal privilege - and they disagree. The conflicting rulings create immediate risk for deal teams that use consumer AI tools without clear protocols.
In February 2026, a federal judge in New York ruled that a former CEO's Claude AI conversations about his legal defense were not protected from disclosure. A week earlier, a Michigan judge said a plaintiff's ChatGPT-generated litigation documents were protected work product. The cases highlight how courts view AI tools differently and expose a gap between how lawyers use AI and how courts will treat those communications in litigation.
The New York Case: No Protection Without a Lawyer
Bradley Heppner, former CEO of Beneficient Company Group, used Anthropic's Claude platform to generate 31 documents analyzing his legal exposure and defense strategies after his indictment. He had hired a lawyer, but conducted this analysis on his own, without counsel's involvement.
Judge Rakoff denied protection on three grounds. First, Claude is not an attorney and cannot establish an attorney-client relationship. Second, Anthropic's terms of service state the company collects user inputs and outputs for model training and reserves the right to share data with third parties - meaning Heppner had no reasonable expectation of confidentiality. Third, Heppner wasn't seeking legal advice from counsel; he was using a consumer AI tool on his own.
The court also rejected work product protection because the documents weren't prepared by or at the direction of his lawyer.
The Michigan Case: AI as a Tool, Not a Third Party
Days before the Heppner decision, a Michigan court reached the opposite conclusion. A plaintiff representing himself in an employment discrimination case used ChatGPT to draft litigation materials. When the defendant moved to compel production of those AI-generated documents, the court refused.
The judge treated ChatGPT as a tool - like a word processor - rather than a third party. Disclosing materials to a tool doesn't waive protection because it doesn't increase the risk of an adversary obtaining them. Since the plaintiff was representing himself, the AI documents reflected his mental impressions, which qualify for work product protection.
The Core Difference: Lawyer Involvement
The cases turn on whether counsel directed the AI use. In Heppner, the lawyer was absent. In Michigan, the self-represented plaintiff effectively stood in counsel's place. Neither case involved a deal team using AI under an attorney's supervision - a common M&A scenario that remains legally uncertain.
The judges also disagreed on what AI is. The Heppner court treated it as a third party that undermines confidentiality; the Michigan court treated it as a tool that doesn't. This split will likely persist across jurisdictions.
What This Means for Deal Teams
The cases expose real litigation risk. Sellers don't want their AI prompts about disclosure obligations used to prove they knowingly misled buyers. Buyers don't want due diligence summaries used to argue they had knowledge that undermines earnout claims. Neither side wants AI-generated financial projections to contradict deal metrics in post-closing disputes.
Courts haven't settled whether AI communications are discoverable in M&A litigation, and deal teams can't wait for clarity. The law will develop slowly. The risk exists now.
Practical Steps for Deal Teams
Establish clear AI policies. Document how your deal team should use AI. Distinguish between consumer platforms like ChatGPT and enterprise tools with different confidentiality terms. Educate everyone - lawyers, bankers, advisors - about discoverability risks before they start using AI on deal materials.
Modify standard contract language. Add provisions that explicitly define "AI-Assisted Deal Materials" to include prompts, outputs, and conversation logs. Have both parties acknowledge these materials are confidential work product prepared for legal purposes, regardless of what the AI platform's terms of service say. Include a covenant that the buyer won't seek access to, assert waiver of, or use any AI-generated deal materials in future disputes.
Control document retention. Extend existing practices for privileged deal communications to AI outputs. Determine whether AI-generated materials can be exported from company systems or platform accounts. Treat them the same way you treat privileged emails - keep them off standard backup systems and document retention schedules.
Use enterprise tools when possible. Consumer AI platforms like ChatGPT and Claude have terms of service that explicitly allow data collection and government disclosure. Enterprise versions often offer different protections. The cost difference is minimal compared to litigation risk.
The Unsettled Legal Ground
These two cases won't be the last word. More courts will address AI and privilege as deals close and disputes arise. But waiting for the law to settle leaves your deal exposed. The time to build protection is before you hit send on that prompt.
For communications professionals involved in deal processes, this means understanding how your legal and business teams are using AI - and flagging risks early. If your team is using consumer AI tools for deal analysis without protocols or contracts, that's a conversation to start now.