Can you trust AI to keep your secrets? For now, assume the answer is no
AI chatbots are becoming confidants. People ask for life advice, vent about work, and even test legal strategies.
Here's the problem: those chats don't carry the legal shields you're used to. No attorney-client privilege. No doctor-patient confidentiality. No spousal privilege. If it matters in a dispute or investigation, it can be discoverable.
There's no privilege in a chatbot
As Juan Perla, a partner focused on AI matters, put it: if these records are relevant, courts can reach them. That alone should change how your clients and your teams interact with AI tools.
Even Sam Altman has acknowledged the gap: talk to a therapist or a lawyer and you have legal protections; talk to a chatbot and you don't. Until the law catches up, treat AI chats as records, not sacred conversations.
Discovery and subpoenas: what's on the line
Think civil discovery, regulatory inquiries, criminal investigations. If a chat thread touches a workplace dispute, divorce, custody, internal complaint, or potential criminal exposure, expect requests for production.
Anonymizing or speaking "hypothetically" won't guarantee safety. If the facts line up, a court can still find those records relevant.
Practical guidance for legal professionals
- Advise clients: do not feed privileged, sensitive, or identifying details into public or consumer AI tools.
- Adopt a firmwide AI policy: approved tools, banned inputs, review steps, and escalation paths.
- Use enterprise options with data controls: no training on your data, configurable retention, audit logs, SSO, and a data processing agreement (DPA).
- Turn on "no logging" or auto-delete settings where available; verify in the contract, not just marketing pages.
- Redact aggressively: remove names, dates, account numbers, unique fact patterns, and file identifiers.
- Never paste evidence, client communications, strategy, or draft pleadings that reveal case posture.
- For high-sensitivity work, consider self-hosted or private instances where you control storage and logs.
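The redaction step above can be partially automated. Below is a minimal, pattern-based sketch in Python (the patterns and placeholders are illustrative assumptions, not a vetted tool): it catches obvious formats such as dates, long account-style digit runs, and email addresses, but it cannot detect names or unique fact patterns, so a human review pass is still required.

```python
import re

# Naive pattern-based redaction sketch (illustrative only, not a DLP product).
# Patterns and placeholder labels are assumptions for this example.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US SSN format
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # common date format
    (re.compile(r"\b\d{8,16}\b"), "[ACCOUNT]"),              # long digit runs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
]

def redact(text: str) -> str:
    """Replace each matched pattern with its placeholder, in order."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Note the limit: the client's name survives, so manual review is still needed.
print(redact("Client J. Smith, acct 4401239876, emailed j.smith@example.com on 3/14/2024."))
# → Client J. Smith, acct [ACCOUNT], emailed [EMAIL] on [DATE].
```

A script like this is a first pass at best; it reduces accidental leakage of structured identifiers but does nothing for narrative facts that make a matter identifiable.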
What to check in an AI tool's terms
- Data retention and deletion: default retention, admin controls, and verifiable deletion timelines.
- Training use: is your data used to train models? Can you opt out by contract?
- Storage location: data residency and cross-border transfer mechanics.
- Security: encryption, access controls, audit trails, SOC 2/ISO 27001 claims with current reports.
- Legal process: how the provider handles subpoenas, notice to customers, and ability to challenge requests.
- Enterprise agreement: DPA, confidentiality terms, and breach notification obligations.
Client scenarios to flag early
- Employment disputes: harassment complaints, performance write-ups, internal investigations.
- Family matters: custody, marital communications, financial disclosures.
- Criminal exposure: anything touching intent, timelines, or admissions.
- Regulatory risk: health data, financial data, export controls, or protected categories.
A simple rule your clients will remember
If you'd only tell a lawyer, doctor, or spouse, don't type it into a chatbot. If you still feel the urge, stop and call counsel.
A lightweight policy you can deploy this week
- Allowed uses: research, generic drafting, and ideation with redacted facts only.
- Prohibited inputs: client identities, unique fact patterns, evidence, strategy, and nonpublic financial/health data.
- Approved tools: list them; block the rest at the network level.
- Settings: disable training on your data; enforce minimal retention.
- Review: attorney approval before any AI-assisted content leaves the firm.
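Parts of a policy like this can be enforced in software at the point of use. The sketch below is a hypothetical pre-send gate in Python (the tool names and prohibited patterns are placeholders, not a real product or a complete data-loss-prevention solution): it refuses prompts aimed at unapproved tools and blocks text matching obviously prohibited markers.

```python
import re

# Hypothetical pre-send gate: tool names, patterns, and policy here are
# illustrative assumptions, not a complete data-loss-prevention solution.
APPROVED_TOOLS = {"enterprise-assistant"}  # placeholder for your approved list

PROHIBITED = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # SSN-shaped strings
    re.compile(r"(?i)\b(privileged|attorney[- ]client)\b"),  # privilege markers
]

def may_send(tool: str, text: str) -> bool:
    """Allow a prompt only when the tool is approved and no pattern matches."""
    if tool not in APPROVED_TOOLS:
        return False
    return not any(p.search(text) for p in PROHIBITED)

print(may_send("enterprise-assistant", "Draft a generic NDA checklist."))  # True
print(may_send("random-chatbot", "Draft a checklist."))                    # False
```

A gate like this backs up the policy; it does not replace training or attorney review, since pattern matching misses most sensitive content.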
Where AI can still help (safely)
Use it for templates, checklists, neutral summaries, or tone edits, without client specifics. Keep it in a sandbox. Treat outputs as drafts. You're still the filter and the liability backstop.
For background on privilege and discovery scope, see the ABA's overview of attorney-client privilege and FRCP 26(b)(1) on relevance and proportionality.
The takeaway is simple: AI is useful, but it isn't your confidant. Treat chatbots like a public record that hasn't gone public yet.