Delaware Court Rules AI Chatbot Advice Not Protected by Lawyer-Client Privilege
A Delaware court ruled against South Korean game company Krafton in a dispute over performance bonuses, citing the CEO's conversations with ChatGPT as decisive evidence. Vice Chancellor Lori Will found that Krafton devised an unfair dismissal strategy based on AI advice to avoid paying executives from an acquired studio approximately 3.7 billion Korean won in bonuses.
The ruling exposes a critical gap in corporate legal protection: conversations with AI tools fall outside attorney-client privilege, unlike discussions with lawyers.
How the Bonus Dispute Unfolded
Krafton acquired Unknown Worlds, developer of the survival game Subnautica, in 2021. The deal included a guarantee to pay up to $250 million in performance bonuses if the sequel exceeded sales targets. In May 2025, internal projections showed a $200 million payout was likely.
Two months later, Krafton dismissed the CEO and two other executives, claiming they attempted to release the game in an unfinished state. The fired executives sued, arguing the dismissals were pretextual, designed solely to avoid the bonus obligation.
The court agreed. Internal records showed CEO Kim Chang-han consulted ChatGPT after a company executive warned him that dismissing the team would not eliminate the bonus obligation and would increase litigation risk.
What ChatGPT Recommended
ChatGPT initially gave Kim the cautious answer he expected. But after he posed leading questions, the AI suggested forming an internal task force to renegotiate the bonus or forcibly acquire the company. It generated a "Response Strategy to a No-Deal Scenario" document.
Kim created a task force called "Project X" and implemented most of ChatGPT's recommendations before dismissing the executives. During trial, he claimed he used the chatbot "like any other search engine to explore options." The court rejected this defense.
The Evidence Trail
Kim shared his ChatGPT conversations via Slack and email. Under U.S. discovery rules, litigants must submit electronic records requested by the opposing party. Slack messages, emails, and memos are all fair game.
The court found that Kim had deleted portions of the relevant ChatGPT dialogues. He explained he removed them because OpenAI might use the information for training purposes. Plaintiffs countered that other messages from the same period remained intact, suggesting selective deletion.
This selective removal strengthened the case against Krafton. Courts can impose sanctions, up to and including judgment, against parties that intentionally withhold or destroy evidence.
No Privilege for AI Conversations
The ruling confirms that corporate executives cannot expect confidentiality when consulting AI tools. Typically, sensitive decisions such as restructuring, mergers, and hostile actions are discussed with internal legal teams or external law firms. Those conversations are protected by attorney-client privilege under U.S. law and cannot be disclosed in court.
AI chatbots receive no such protection. A February ruling by the U.S. District Court for the Southern District of New York ordered former GWG Holdings chairman Bradley Heppner to submit 31 documents created using Anthropic's Claude to prosecutors. The court stated: "No attorney-client relationship exists or can exist between an AI user and a chatbot platform."
How Law Firms Are Responding
Major U.S. law firms are now issuing guidance to clients. ShutreMonte Law Firm in New York warned that sharing legal advice or attorney communications with AI chatbots could waive attorney-client privilege.
Debevoise & Plimpton posted guidelines recommending that clients include a specific prompt when using AI for case-related work: "This investigation is being conducted under the direction of the litigation counsel." The intent is to establish a clearer legal relationship and potentially strengthen privilege claims.
One industry insider offered blunt advice: "If you're using chatbots to devise hostile takeover strategies, assume opposing counsel will read every word."
What This Means for Executives
The ruling creates a stark choice for executives and strategy professionals. AI tools offer speed and breadth when exploring options. But any sensitive decision, especially one involving litigation, restructuring, or financial obligations, should remain in conversations with lawyers, not chatbots.
Krafton's case demonstrates that using AI to devise or refine strategies that later become legally contentious leaves a discoverable record. The CEO's documented interactions with ChatGPT became the smoking gun proving intent.
Executives should treat AI conversations the way they treat email: assume they will be read in court. For strategy work with legal implications, the safer path remains the traditional one: a conversation with counsel.