Using AI They Don't Trust, Lawyers Push Ahead

Lawyers are adopting AI, but trust is thin: only 1 in 5 rates it highly, and many won't file AI drafts in court. Use is rising, and human review and guardrails are now a must.

Categorized in: AI News, Legal
Published on: Feb 20, 2026

Lawyers are pushing AI forward without fully trusting it

Only one in five lawyers places high trust in AI-generated legal work. Two-thirds (67%) have had to override or correct AI outputs, 58% wouldn't feel comfortable submitting an AI-drafted document to a regulator or court, and 42% say they have little to no trust in the technology at all.

Adoption is moving fast, but it's not smooth. Nearly half (47%) report AI-related conflict within their teams. The gap between deployment and confidence is now a management issue, not just a technology one.

Where AI is used, and where it's off-limits

  • Commonly automated: document classification, compliance alerts, risk flagging, legal research, and case law summarisation.
  • Kept off-limits: final contract approval (45%), ethics and compliance judgments (42%), and litigation decisions (37%).

Why trust is low

  • Accuracy and hallucinations: 57%
  • Data security and confidentiality: 51%
  • Liability exposure: 45%
  • Ethical risks: 44%

Concerns aren't abstract. Courts have criticised lawyers for citing fabricated case law generated by AI, and bar associations are clear that technological competence is part of professional competence. See ABA Model Rule 1.1 on competence.

What would raise confidence

  • Mandatory human sign-off: 41%
  • Explainable decision-making: 20%
  • Built-in compliance guardrails: 17%
  • Nothing would raise their confidence: 15%

The near-term shift in work and hiring

Despite scepticism, 62% expect AI use to rise moderately or significantly over the next year. Some 43% anticipate reduced hiring or staffing needs due to automation, while 21% expect to recruit more tech-savvy talent.

A practical playbook for legal teams

  • Adopt a human-in-the-loop policy: Require documented human review and sign-off for anything client-facing or court/regulator-bound.
  • Ban sensitive inputs by default: Specify what must never be pasted into AI tools; use redaction, DLP, and secure workspaces where needed (a minimal screening sketch follows this list).
  • Ground research in verifiable sources: Demand citations with links, run case law through trusted databases, and use checklists for cite-checking.
  • Stand up compliance guardrails: Role-based access, logging, audit trails, and approval flows for higher-risk use cases.
  • Test for hallucinations: Maintain a small validation set and track error rates; block models that fail threshold tests for certain tasks (see the error-rate gate sketch after this list).
  • Standardise prompts and templates: Create approved prompt libraries and templates with disclaimers and acceptance criteria.
  • Vendor and model due diligence: Review data handling, retention, indemnities, and model update policies; negotiate liability terms.
  • Court and regulator protocol: Set rules on disclosure of AI use, citation verification, and who authorises submissions.
  • Insurance and responsibility: Map responsibility for errors across partner, associate, and vendor; align malpractice coverage.
  • Upskill the right roles: Train paralegals and junior lawyers on low-risk automations (classification, summaries, research) before expanding scope.
  • Measure value, not hype: Track turnaround time, error rates, cost per matter, and rework to decide where AI stays or goes.
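
To make the "ban sensitive inputs" item concrete, here is a minimal Python sketch of a pre-submission screen that flags a prompt when it matches known-sensitive patterns. The pattern list, the internal MATTER-number format, and the screen_prompt helper are illustrative assumptions; a real deployment would lean on proper DLP tooling and a maintained client and matter blocklist.

```python
import re

# Illustrative patterns only; a firm would maintain its own blocklist and
# pair this with real DLP tooling rather than relying on regexes alone.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "matter_number": re.compile(r"\bMATTER-\d{4,}\b"),  # assumed internal format
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarise the deposition for jane.doe@example.com on MATTER-20391."
    hits = screen_prompt(draft)
    if hits:
        print(f"Blocked: prompt contains {', '.join(hits)}; redact before sending.")
    else:
        print("No blocked patterns found; proceed with human review as usual.")
```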
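
For the hallucination-testing item, a similarly minimal sketch of an error-rate gate over a small, human-reviewed validation set. The 10% threshold and the sample results are assumptions; set real thresholds per task based on your own risk tolerance.

```python
# Each validation item is marked True (passed human cite-checking) or False.
ERROR_THRESHOLD = 0.10  # assumed cut-off: block the model for a task above 10% errors

def error_rate(results: list[bool]) -> float:
    """Share of validation items the AI output got wrong, per human review."""
    if not results:
        return 0.0
    return results.count(False) / len(results)

def gate(results: list[bool], task: str) -> bool:
    """Approve or block a model for a given task based on its measured error rate."""
    rate = error_rate(results)
    approved = rate <= ERROR_THRESHOLD
    print(f"{task}: error rate {rate:.1%} -> {'approved' if approved else 'blocked'}")
    return approved

if __name__ == "__main__":
    # Hypothetical review log: one entry per checked summary.
    summarisation_checks = [True, True, True, False, True, True, True, True, True, True]
    gate(summarisation_checks, "case-law summarisation")
```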

Where to upskill next

If you're formalising policy and workflows, start with the use cases the survey shows are working: research, summarisation, classification, and compliance alerts. For structured training built for legal teams, see AI for Legal.

Bottom line

AI is already in your matters. Treat it like any risky but useful associate: define the work it can do, review its output, log decisions, and hold it to standards. Confidence follows process.

