A proposal, not a prophecy - when legal AI becomes the default
13 February 2026
Legal AI won't "arrive" with a launch event. It will settle, almost unnoticed, into the tools you already use. If that happens, the question won't be which product you picked. It will be which platform picked you.
When the platform eats the vendor
Think SatNav. You used to buy a dedicated device, stick it on the windscreen, and treat it like mission gear. Then maps moved into the phone, and the box on the dashboard became aftermarket.
Legal AI is on the same curve. "Best tool" gives way to "default surface." Once the default model is good enough, "good enough, everywhere" beats "best, somewhere."
Pipes, not prose: what finance is telling law
Finance is running the public trial. Anthropic's push with Claude wasn't sold as a clever chatbot. It was packaged as a financial analysis solution: one interface over licensed feeds, internal data, and research, with links back to sources so users can check the work.
The moat isn't witty output. It's access. Connectors into enterprise data platforms and specialist providers let the model see what matters inside systems firms already pay for and trust. Recently, Claude Cowork plug-ins - including a Legal module - showed how role-specific packs can be wired straight into the stack.
Distribution then moves the needle. Claude working inside Excel puts the model where the real work sits - assumptions, stress tests, and models - so AI stops being a toy and becomes a feature. Partnerships like the one with LSEG add the legitimacy that satisfies procurement, compliance, and audit.
Takeaway for law: this is the blueprint - embed into the work surface, wire into trusted knowledge, and wrap it all in controls.
From add-on to default: when Word becomes the desk
If Excel is finance's desk, Word is ours. That's the pivot. Once AI lives inside the drafting surface, the battlefield shifts from point tools to the default environment.
Recent moves make the direction clear. Robin AI's capability has begun to fragment, and a meaningful slice of its talent has been absorbed into the Microsoft Word organisation. That doesn't mean Word becomes a law firm. It means the platform owner is absorbing legal-workflow muscle into the surface.
Expect the drafting surface to feel more legal-native by default: work within firm templates, suggest edits and alternatives, summarise and reconcile revisions, and pull context from document stores and comms - with connectors into DMS and matter systems - all fenced by enterprise controls. In practice, this looks like a Copilot-style layer across Word, Outlook, and Teams, built to cut friction, not replace judgement.
If you want a sense of where the surface is heading, see Microsoft's overview of Copilot in Word here.
What this looks like in practice
- Drafts inside firm templates with clause suggestions based on your playbook.
- Tracked-change summaries that call out risk, deltas from precedent, and unapproved terms.
- Context pulls from your DMS, knowledge base, email threads, and matter files via governed connectors.
- Source-linked outputs so reviewers can verify, not guess.
- Admin controls: logging, data boundaries, redaction defaults, and approval flows for external sharing.
A proposal, not a prophecy - but act like the ground will move
This isn't destiny. It's a plausible path. But the path matters because AI is moving closer to the desk, becoming more ambient and embedded - and with that comes epistemic risk. You can let this arrive by accident, or you can set policy and posture now.
What firms should do next
- Pick your default surface strategy. Decide whether you'll lean into Office (Word, Outlook, Teams) or maintain a neutral layer. Your platform choices will pick your vendor options for you.
- Get your data house in order. Consolidate document stores, fix metadata, tag privilege, and map access. Good connectors are useless if your knowledge is messy.
- Define "good enough" vs "specialist." Set thresholds for when the default model is fine and when to route to a specialist tool or human expert. Make the escalation path clear inside the workflow.
- Mandate verifiability. Require citations to sources, link-backs to documents, and change logs for anything AI touches. No source link, no trust.
- Pilot on low-stakes, high-volume work. Start with NDAs, engagement letters, or playbooked clauses. Measure cycle time, edit rates, and error classes.
- Write a one-page usage policy. What data can be sent, what stays local, who can approve connectors, and how to report issues. Keep it short and enforceable.
- Train for review, not prompt theatre. Teach lawyers to verify sources, spot model failures, and apply firm style. Prompts matter, but review discipline matters more. If you need structured upskilling, see role-based options here.
- Audit the pipes. Demand SSO, data residency options, tenant isolation, SOC 2/ISO 27001, and full logging. Buy for integration and governance, not for demo sizzle.
- Tighten client communications. Update engagement letters to explain AI-assisted steps, review standards, and confidentiality guarantees. No surprises.
- Keep a manual fallback. Maintain non-AI playbooks and offline workflows for outages, sensitive matters, and jurisdictions that require stricter handling.
- Stand up a small "AI desk." Two to four people across KM, IT, risk, and a practising partner. This group owns the playbooks, connectors, metrics, and comms.
Bottom line
This is not about picking the cleverest tool. It's about picking a surface, cleaning your data, and insisting on pipes that you control. Do that and you can move when the ground moves - quickly, safely, and without losing your footing.