AI-assisted due diligence is standard. Accountability still sits with the solicitor.
AI review tools are now baked into transactional workflows. They scan thousands of documents, surface clauses, and compress timelines.
But when a restrictive covenant slips through, a change-of-control clause goes unflagged, or an indemnity is missed, the liability does not move to the vendor. It stays with the solicitor who relied on the output.
Delegation doesn't shift responsibility
The retainer still carries the implied duty to exercise reasonable care and skill. Automating part of the review is no different, in principle, from delegating to a trainee - you remain responsible for the result.
Courts will ask whether reliance on AI was reasonable and properly supervised. "Industry standard" will not be enough; expect scrutiny of human verification and quality control. Attempts to exclude liability for negligence in engagement letters still face the reasonableness test under the Unfair Contract Terms Act 1977, and most clients have not expressly accepted the risk of AI error.
The risk perimeter is wider than you think
AI doesn't remove error; it changes its source. The legal consequence is the same. Following Manchester Building Society v Grant Thornton [2021] UKSC 20, professionals remain liable for losses within the scope of the duty they assumed. If your advice signals "no material encumbrances," a missed covenant likely falls squarely within scope - regardless of tool use.
Tortious duties can also reach foreseeable third parties, such as lenders or investors who rely on your report. Digital workflows don't narrow that exposure; they can extend it.
Supervision: the current gap
Rule 3.5 of the SRA Code of Conduct requires adequate supervision, but it offers no practical guidance on AI use. Must every AI output be checked? Is sampling acceptable? Should clients be told when automation is used?
Firms are answering these questions alone. Some require dual human review; others lean on vendor testing. That inconsistency creates uneven standards and unclear liability boundaries. The SRA's principles say you're accountable regardless of tools, but practitioners need operational direction, not platitudes.
What the SRA should publish now
- Minimum expectations for human verification of AI outputs, especially for high-impact items (change-of-control, restrictive covenants, indemnities, termination rights, security interests).
- Supervisory protocols for junior lawyers using AI platforms, including calibration exercises, second reviews, and documented sign-off.
- Clear guidance on client communication: tell clients when AI is used, what it covers, what it doesn't, and how it's supervised.
Commercial pressure vs professional risk
Deal timetables now assume AI speed. Shorter review windows tempt teams to lean on summaries and confidence scores.
Professional indemnity insurers are already asking which systems you use and how you verify outputs. Without regulatory anchors, the market will self-regulate - and produce divergent standards and inconsistent quality control.
Accountability in practice: steps firms can implement now
- Define materiality and clause taxonomies. Require human verification on high-risk categories; allow sampling only on low-risk items.
- Adopt a sampling policy tied to risk and AI confidence, with minimum sample sizes and escalation triggers; one possible policy is sketched after this list.
- Build QA workflows: two-tier review for red flags, exception reporting, and mandatory checks where confidence is low or documents are unreadable.
- Vet vendors: run your own test sets, compare recall/precision against human baselines, and log changes when models update (see the second sketch after this list).
- Preserve an audit trail: inputs, prompts, settings, versions, reviewer notes, and sign-offs. If it's not documented, it didn't happen.
- Client disclosure: a short, plain-language notice explaining AI use, supervision, and limits. Obtain informed consent for any deviation from full human review.
- Engagement terms: avoid broad negligence exclusions that will fail the reasonableness test. Instead, define scope, assumptions, and reliance limits clearly.
- Competence and training: upskill reviewers and supervisors on AI limitations, false positives/negatives, and calibration. Test proficiency periodically.
- Insurer alignment: share policies with your broker and maintain evidence of verification and supervision procedures.
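A minimal sketch of how a risk-based sampling policy could be encoded, for illustration only: the clause categories, thresholds, and minimum sample size below are hypothetical placeholders, and a firm's own materiality taxonomy and risk appetite would supply the real values.

```python
from dataclasses import dataclass
import random

# Hypothetical values for illustration; substitute the firm's own taxonomy and thresholds.
HIGH_RISK = {"change_of_control", "restrictive_covenant", "indemnity",
             "termination_right", "security_interest"}
SAMPLE_RATE_LOW_RISK = 0.10   # verify at least 10% of low-risk extractions
MIN_SAMPLE_SIZE = 25          # never sample fewer than 25 items
ESCALATION_THRESHOLD = 0.70   # escalate anything the tool scores below this

@dataclass
class Extraction:
    doc_id: str
    clause_type: str
    confidence: float  # score reported by the review tool, 0.0 to 1.0

def select_for_human_review(extractions: list[Extraction]) -> list[Extraction]:
    """Return the extractions a human reviewer must verify under the policy."""
    mandatory, low_risk = [], []
    for e in extractions:
        # High-impact clause types and low-confidence outputs always go to a human.
        if e.clause_type in HIGH_RISK or e.confidence < ESCALATION_THRESHOLD:
            mandatory.append(e)
        else:
            low_risk.append(e)
    # Everything else is sampled at a set rate, never below the floor.
    sample_size = max(MIN_SAMPLE_SIZE, int(len(low_risk) * SAMPLE_RATE_LOW_RISK))
    sampled = random.sample(low_risk, min(sample_size, len(low_risk)))
    return mandatory + sampled
```

The design point is that high-impact clause types and low-confidence outputs never enter the sampling pool at all: they always reach a reviewer, and the written policy gives the audit trail something concrete to record.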
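For vendor vetting, recall and precision against a human-reviewed answer key are the figures that matter for missed-clause risk. A minimal sketch, assuming the tool and a full human review have both been run over the same test set:

```python
def recall_precision(flagged: set[str], answer_key: set[str]) -> tuple[float, float]:
    """Compare clauses flagged by a tool against a hand-built answer key.

    Both sets hold clause identifiers such as "doc42:change_of_control";
    the answer key comes from a full human review of the same test set.
    """
    true_positives = len(flagged & answer_key)
    recall = true_positives / len(answer_key) if answer_key else 1.0
    precision = true_positives / len(flagged) if flagged else 1.0
    return recall, precision

# A tool that finds three of four known change-of-control clauses has 75%
# recall on that category, however polished its summaries look. Rerun the
# same test set whenever the vendor updates the model, and log the results.
```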
Bottom line
AI has changed how due diligence gets done. It hasn't changed who answers when something material is missed.
The SRA should move from encouragement to expectation: supervision, verification, and disclosure obligations apply equally to AI-assisted work. Clear guidance protects clients, clarifies liability, and supports confident adoption across the market.
Need to level up AI literacy across your transactional teams? Practical training helps reduce blind reliance and tighten supervision. Explore role-based options here: AI courses by job.