AI in Dispute Resolution: Risks, Verification Duties, and Lessons from Recent Cases
English cases Ayinde and Al-Haroun show that AI-fabricated citations can trigger referrals to professional regulators. Verify every authority, protect confidentiality, and keep lawyers accountable.

AI Risks the Legal Sector Must Consider in Dispute Resolution
Two recent English High Court decisions - Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank - are a warning shot for legal teams using AI. Both involved fabricated authorities placed before the court, and both resulted in referrals to professional regulators. The same risks apply in international arbitration, where the margin for error is narrow and credibility is everything.
AI can deliver speed and cost savings across the dispute lifecycle. But without tight controls, it can also create false confidence, fabricated citations and ethical problems that compromise both the case and the lawyer.
What "AI" Means in Disputes
Not all AI is the same. Predictive coding (technology-assisted review) uses machine learning to classify documents from a seed set, and has been accepted in English courts since Pyrrho Investments Ltd v MWB Property Ltd [2016]. It remains useful and relatively mature.
Generative AI works differently. Large language models predict the next words based on patterns in huge training sets. They can draft, summarize and analyze at scale, but also produce confident errors if left unchecked. Agentic systems extend this by chaining tasks and tools without constant human direction - helpful, but riskier if controls are weak.
Where AI Helps Today
- Disclosure and document review: prioritization, clustering, deduplication
- Factual and legal research: surfacing lines of argument and authorities
- Drafting support: outlines, issue lists, chronologies and first-pass summaries
- Data analytics: patterns in case timelines, contract data and opposing counsel behavior
Surveys of arbitration practitioners show a clear shift: fewer than 20% used AI "often" in the past five years across core tasks, but a majority expect to do so for research, analytics and document review in the next five years. The same surveys show concern about undetected errors and bias (51%) and ethical infractions (24%). That tension - usefulness versus risk - is the point.
Cautionary Cases: Ayinde and Al-Haroun
In Ayinde, counsel submitted five fake authorities. When challenged, the lawyer denied using generative AI and denied that the conduct was improper. The court found the behavior "wholly improper," stressed the non-delegable duty not to mislead the court, and referred the matter to the professional regulator. It also noted that deliberately fabricating authorities, or lying about whether AI was used, could amount to contempt of court.
In Al-Haroun, 18 of 45 cited authorities did not exist, including a fake decision attributed to the sitting judge. The solicitor admitted using public generative AI tools and relying on the client's "research" without checking it. The court described a "lamentable failure" to verify accuracy, referred the solicitor to the regulator, and warned that putting even inadvertent misstatements before the court can amount to incompetence and gross negligence.
Bottom line: every citation must be verified against authoritative sources before filing. AI can assist, but responsibility sits with the lawyer signing the document.
Use of AI by Arbitrators and Judges
Arbitrators and judges are adopting AI for administrative and research tasks, but there is little support for using AI to assess merits or draft reasons in awards and judgments. A reasoned decision requires transparent evaluation of arguments and evidence. As long as model reasoning remains opaque, trust is limited.
UK judicial guidance accepts practical use cases (e.g., summarizing documents) but cautions against relying on AI for legal analysis or decision-making. Judges are reminded to verify outputs, keep confidential data out of public tools, and remain accountable for anything issued in their name. See the official guidance for details.
Guidance for Judicial Office Holders on Artificial Intelligence
Practical Guardrails for Counsel
- Verification by default: Shepardize/KeyCite every case; read the full text; confirm quotes and pincites.
- Citation integrity: never file an AI-suggested citation unchecked; disable or distrust auto-citation features unless a human has confirmed each reference.
- Confidentiality: never paste client-identifying or sensitive data into public tools; use approved, enterprise deployments with data controls.
- Model limits: treat outputs as unverified drafts; require human review for analysis, legal conclusions and filings.
- Source transparency: prefer outputs with links to primary sources; distrust "summaries" without verifiable references.
- Recordkeeping: keep prompts, outputs and verification notes; log who checked what and when.
- Bias checks: stress-test results for skew or selective quoting; compare against authoritative treatises and primary materials.
- Vendor diligence: assess training data, privacy terms, jurisdiction, indemnities and audit rights.
- Team policy: write a short, enforceable AI policy covering approved tools, prohibited uses, review standards and sanctions for breach.
- Client alignment: agree on AI use boundaries in engagement letters where appropriate.
Practical Guardrails for Arbitrators and Tribunals
- No AI for merits assessment or drafting the tribunal's reasoning.
- Permitted uses: admin summaries, organization, timeline extraction - with human verification.
- Confidentiality: use only secure, non-public tools; avoid uploading the record to public models.
- Disclosure: consider protocol language with parties on acceptable AI uses during the case.
- Bias awareness: do not rely on AI characterizations of witnesses, experts or counsel.
Checklist Before You File
- Every authority verified against primary sources; quotes and pincites double-checked.
- All AI-generated text reviewed by a qualified lawyer; risky assertions removed or supported.
- No confidential or privileged content exposed to public tools.
- Clear audit trail of tools used, reviewers and verification steps.
Policy and Ethics: The Floor, Not the Ceiling
Regulators and institutions are issuing helpful guidance. Treat that guidance as a minimum standard, then add controls that match your matters and your risk tolerance. The duties of honesty and integrity and the obligation not to mislead the court are unchanged; AI does not dilute them.
Bar Council: Considerations when using Generative AI
Forward View
Use AI to reduce low-value work and surface insight faster. Keep lawyers in the loop for anything that affects strategy, evidence and the record. The firms and chambers that win here will be the ones that pair speed with discipline.
This publication is for general information only and does not constitute legal advice.