Copilot Bypassed DLP to Read Confidential Emails - Urgent Lessons for Law Firms

A Copilot bug let chat summaries pull in emails marked confidential, including Drafts and Sent Items. For firms, that means legal risk, so gate AI, test labels, and document controls.

Published on: Feb 22, 2026

Microsoft's Copilot Read Confidential Emails - What That Means For Your Firm

Microsoft confirmed a bug in Microsoft 365 Copilot Chat that let the assistant summarise emails explicitly labelled as confidential, including Drafts and Sent Items. The flaw, tracked as CW1226324 and first detected on 21 January 2026, bypassed data loss prevention (DLP) controls in the "work tab" chat feature.

Microsoft says the issue is fixed and that no one saw data they weren't already authorised to access. Still, the assistant's access to protected content was unintended - and that's the point. In law, intent doesn't patch privilege after the fact.

The Legal Exposure

Client confidentiality, legal professional privilege, and GDPR obligations assume your safeguards work. If a vendor bug weakens those controls, regulators and clients will ask whether your firm exercised due care in deploying and monitoring AI tools.

Microsoft's own documentation notes that sensitivity labels don't behave consistently across all Microsoft 365 apps. Buried caveats won't satisfy a client when privileged threads are ingested by an AI system without clear consent or notice. See Microsoft's guidance on sensitivity labels for scope and limitations: Microsoft Learn: Sensitivity labels.

The timing matters. The European Parliament's IT department reportedly paused built-in AI features on staff devices over data exposure concerns. The UK's NHS flagged the Copilot label-bypass on its internal portal. These are conservative operators with high-stakes data - exactly where AI missteps carry legal consequences.

Expert Warning: This Is Only The Beginning

Security experts are blunt. Dr. Ilia Kolochenko, CEO of ImmuniWeb and a Fellow at the European Law Institute, warns that agentic AI incidents will surge in 2026, becoming a frequent class of security event across companies of all sizes. Traditional DLP and access controls struggle to detect excessive or unintended AI use by employees or insiders.

The threat is two-sided: misuse by your own people and malicious AI agents engineered to siphon data. As Kolochenko puts it, vast amounts of sensitive personal data are shared with LLMs daily without precautions - including within government. "Shadow AI" - staff using personal devices and apps to process confidential material - is now a frontline risk to privilege and regulatory compliance.

Litigation On The Horizon

Expect class actions and individual lawsuits alleging unlawful collection and processing of data by AI vendors and integrators. Some actors will claim inadvertent collection by autonomous agents; that defence hasn't been stress-tested in court at scale.

If a few large incidents hit critical providers or leak classified data, regulators will move fast. Over-correction is possible. That could mean severe constraints on enterprise AI and a chilling effect across vendors.

Action Plan For Law Firms (Start This Week)

  • Map AI exposure: Identify every AI tool touching email, DMS, chat, and matter files - official deployments and bring-your-own apps. Include mobile and personal devices.
  • Test your labels and DLP against AI: Don't assume legacy DLP rules apply to AI assistants. Build test cases for Drafts, Sent Items, shared mailboxes, and delegated access.
  • Gate Copilot and similar tools: Restrict "work tab" and cross-source summarisation until retested. Use pilot groups, dedicated test tenants, and explicit opt-ins.
  • Tighten sensitivity labels: Default to "Confidential" for client mailboxes and matter workspaces. Require justification and logging to downgrade or override.
  • Contractual cover: Update client engagement terms and privacy notices to disclose AI processing. Add AI-specific DPAs, audit rights, incident SLAs, and data residency commitments.
  • Monitor and block Shadow AI: Enforce MDM on mobiles, browser controls, and egress filtering to prevent uploads to unapproved AI apps. Track unusual summarisation and export activity.
  • Privilege-first workflows: Separate privileged mailboxes and repositories. Disable AI access by default; allow case-by-case with partner approval and documented client consent.
  • Incident readiness: Pre-build playbooks for AI data mishandling - containment, notification triggers, privilege review, and regulator engagement.
  • Independent validation: Run red-team style tests against AI features (prompt injection, label bypass, excessive data access). Verify vendor claims with evidence, not marketing.
  • Train your people: Give attorneys and support staff simple rules on what AI can and can't touch. Reinforce with real examples of privilege risk and sanctions.
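The "test your labels and DLP against AI" step above can be expressed as a small test matrix. This is an illustrative sketch, not a real tenant API: the source locations, label names, and the blocking policy are all assumptions you would replace with your own DLP rules and a harness that actually queries the assistant.

```python
from itertools import product

# Illustrative sources and sensitivity labels - substitute your own tenant's values.
SOURCES = ["Inbox", "Drafts", "Sent Items", "Shared mailbox", "Delegated access"]
LABELS = ["Confidential", "Confidential\\Privileged", "Internal", "Public"]

def expected_ai_access(label: str) -> str:
    """Assumed policy for this sketch: AI summarisation is blocked for any
    Confidential-family label, allowed otherwise."""
    return "BLOCK" if label.startswith("Confidential") else "ALLOW"

def build_test_cases():
    """Enumerate every source/label pair with its expected DLP outcome,
    so no combination (e.g. a labelled draft in a shared mailbox) is skipped."""
    return [
        {"source": src, "label": lbl, "expected": expected_ai_access(lbl)}
        for src, lbl in product(SOURCES, LABELS)
    ]

if __name__ == "__main__":
    cases = build_test_cases()
    print(f"{len(cases)} test cases")  # 5 sources x 4 labels = 20
    for case in cases[:3]:
        print(case)
```

In practice, each generated case would be run against the assistant (ask it to summarise the planted test email) and the observed behaviour compared to `expected`; any `BLOCK` case that returns content is a control failure worth logging.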

Governance That Sticks

Make AI risk visible to leadership with a single register of tools, purposes, data categories, and legal bases. Anchor it to GDPR principles like purpose limitation and data minimisation. Reference the source law, not just policies: Regulation (EU) 2016/679 (GDPR).
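The register described above can start as something as simple as a structured record per tool. A minimal sketch, assuming illustrative field names (the tool names, data categories, and legal-basis strings below are examples, not prescriptions):

```python
from dataclasses import dataclass, field

@dataclass
class AIToolEntry:
    """One row in a firm-wide AI risk register (field names are illustrative)."""
    tool: str
    purpose: str                    # purpose limitation, Art. 5(1)(b) GDPR
    data_categories: list           # what data the tool can reach
    legal_basis: str                # e.g. "Art. 6(1)(f) GDPR - legitimate interests"
    approved_matters: list = field(default_factory=list)
    kill_switch_owner: str = "IT Security"

def register_report(entries):
    """Summarise the register for leadership: tool -> data categories it touches."""
    return {e.tool: sorted(set(e.data_categories)) for e in entries}

entries = [
    AIToolEntry(
        tool="M365 Copilot Chat",
        purpose="Email and document summarisation",
        data_categories=["client correspondence", "personal data"],
        legal_basis="Art. 6(1)(f) GDPR - legitimate interests",
    ),
]
print(register_report(entries))
```

Even this flat structure makes the key questions answerable at a glance: which tools see privileged material, under what legal basis, and who can switch them off.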

Tie approvals to matters, not just departments. If an AI tool can read across the estate, treat it as a firm-wide system of record with change control, access reviews, and kill switches. Assume labels can fail and design compensating controls.

Where To Skill Up

If your team needs structured guidance on safe AI adoption in legal practice, start here: AI for Legal. For Copilot configurations, guardrails, and tenant hardening specifics, see Microsoft AI Courses.

The takeaway: Microsoft moved quickly, but responsibility sits with you. Trust is your product. Treat AI access to client communications as a core professional conduct issue - and prove, in writing and in logs, that your controls work.
