When Employees Share Trade Secrets With AI, Companies Lose Legal Protection
Every time an employee pastes proprietary source code, a customer list, or a confidential business strategy into ChatGPT, Claude, or Google Gemini, they may be dismantling the legal protections that make those secrets worth protecting. The problem is no longer hypothetical, and right now the burden of preventing it falls on employers.
The Legal Exposure
Under the federal Defend Trade Secrets Act and the Uniform Trade Secrets Act, adopted across most states, a company claiming trade secret protection must show it took reasonable measures to maintain secrecy. Confidentiality agreements, physical access controls, and employee training historically satisfied that requirement. Those safeguards were built for a different era: one of thumb drives and disgruntled employees, not one in which an engineer can transmit an entire codebase to a third-party AI platform in seconds.
The issue runs deeper than whether a vendor actually uses inputs for training. Entering trade secrets into a public generative AI tool may itself threaten their protected status.
In February 2026, the U.S. District Court for the Southern District of New York addressed this directly in United States v. Heppner. The court held that attorney-client privilege did not extend to documents prepared using Claude and later shared with counsel. The reasoning: Anthropic's Privacy Policy permits sharing user data with third parties, meaning users "do not have substantial privacy interests" in communications with public AI platforms.
The trade secret implications are direct. A company that inputs trade secrets into a public AI tool, particularly one that cannot guarantee confidentiality, risks a court finding that it voluntarily disclosed that information to an outside party. That finding would be fatal to any subsequent trade secret claim.
Courts and opposing counsel will predictably apply Heppner's logic to trade secret litigation. Beyond litigation risk, employers must also contend with labor law constraints when crafting their response.
Labor Law Constraints on AI Policies
The National Labor Relations Board has made clear that overbroad workplace policies that could chill employees from discussing wages, working conditions, or collective activity are unlawful regardless of employer intent.
AI policies must be narrowly tailored to protect legitimate business interests, specifically trade secrets and proprietary information. A blanket ban on all AI tool use, or a sweeping confidentiality mandate that captures AI-generated content without limitation, can draw scrutiny if employees or unions argue the policy restricts protected activity. Employment counsel should review any policy before it goes live.
Building a Defensible Program
The "reasonable measures" standard does not require perfection, but it does require reasonableness in light of the company's circumstances and the value of the information. The standard is evaluated as of the time of the alleged misappropriation; a policy adopted after a disclosure provides no retroactive protection.
Written AI Acceptable Use Policy. Identify categories of information that may not enter external AI platforms: source code, customer lists, financial projections, M&A targets. Distinguish between approved enterprise tools and consumer-facing tools. Require written employee acknowledgment at onboarding and annually.
Vendor Audit and Enterprise Agreement Review. Audit the terms of service and data processing agreements for every AI tool in use. Focus on whether the vendor retains training rights over inputs, what security certifications apply, and whether the enterprise product has adequate data isolation from the consumer version.
Technical Controls. Policies alone are insufficient. Data Loss Prevention tools configured to block uploads of sensitive data to unapproved platforms, network-level restrictions on consumer AI sites from corporate devices, and audit logging of AI tool use are the measures courts are most likely to credit.
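To make the DLP point concrete, here is a minimal sketch, in Python, of the kind of pattern-based outbound check a DLP rule engine applies before text leaves a corporate device. The category names, regular expressions, and thresholds below are illustrative assumptions, not any vendor's actual rule set; production tools ship far richer rule libraries and inspect traffic at the network layer rather than in application code.

```python
import re

# Illustrative patterns a DLP rule set might flag in outbound text.
# Real deployments use vendor-maintained rule libraries, not these toy regexes.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"\b(def |class |#include|import )\w*"),
    "credential": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),        # long token-like strings
    "customer_record": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def flag_outbound(text: str) -> list[str]:
    """Return the names of sensitive categories matched in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt that pastes proprietary code alongside a customer email
prompt = "def price_model(x): ...  contact: jane@example.com"
print(flag_outbound(prompt))  # → ['source_code', 'customer_record']
```

A flagged prompt would be blocked or quarantined and the event written to an audit log; it is that log trail, not the policy document alone, that a court can credit as a reasonable measure.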
Targeted, Documented Training. General confidentiality training is not adequate. Deliver and document scenario-based training that concretely illustrates which kinds of prompts create risk and why.
Updated Employment and IP Agreements. Confidentiality and IP assignment agreements should expressly address AI, making clear that trade secret obligations apply equally to disclosure through AI prompts, and that AI-generated outputs incorporating proprietary information remain company IP.
What Matters
Employee intent is largely irrelevant. The well-meaning engineer who debugged proprietary code using an unapproved AI tool has created the same legal problem as someone who deliberately exfiltrated data.
Companies that treat AI governance as a trade secret protection issue, not merely a technology policy, and build the vendor, technical, and training infrastructure to match will be better positioned both to protect their most valuable assets and to pursue trade secret claims if protection fails.