AI in Dispute Resolution Demands Caution: Lessons from Ayinde and Al-Haroun for Litigation and International Arbitration

Courts warn against careless AI in litigation and arbitration. Protect accuracy, confidentiality, data, and privilege with human checks, clear policies, and secure tools.

Categorized in: AI News, Legal
Published on: Sep 27, 2025

AI Risks the Legal Sector Must Consider in Dispute Resolution

Courts are beginning to see the consequences of unguarded AI use. Two recent cases, Ayinde v. London Borough of Haringey and Al-Haroun v. Qatar National Bank and QNB Capital, both of which involved submissions citing non-existent or inaccurate authorities traced to generative AI, signal the same message: use AI with caution in litigation. The same discipline should apply in international arbitration, where confidentiality and procedural integrity are central.

AI can speed up tasks, but speed without controls creates exposure. Treat AI as a junior assistant with no judgment, limited context, and a tendency to sound confident when it is wrong.

Key risk areas you should address now

  • Accuracy and duty to the court: AI can invent citations or misstate authorities. Every reference, quote, and extract requires human verification and source checking.
  • Confidentiality and privilege: Public or misconfigured tools may capture prompts and outputs. Avoid feeding client-confidential information to models that train on inputs, and contract for enterprise tools with clear data-use terms.
  • Data protection: Check lawful basis, special category data handling, and cross-border transfers. Complete a DPIA where required and limit personal data in prompts.
  • Disclosure and record-keeping: Prompts, outputs, and intermediary notes may be disclosable. Decide what you retain, how you log AI involvement, and how you assert privilege (a minimal logging sketch follows this list).
  • Bias and fairness: Model outputs can reflect skewed data. Screen summaries, rankings, and risk scores for discriminatory effects.
  • Witness evidence risks: AI-assisted drafting can contaminate memory and tone. Preserve the witness's own words and keep an audit trail of how drafts were produced.
  • Expert work product: If an expert uses AI, ensure methodology transparency, reproducibility, and error analysis. Agree disclosure boundaries in advance.
  • Court and tribunal expectations: Some fora require certification of human review or restrict AI use. Check standing orders and propose procedural terms if none exist.
  • Cybersecurity: Transcription, translation, and document-review tools expand the attack surface. Apply client-approved security standards to every AI vendor.
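
One practical way to satisfy the record-keeping point above is to log every AI-assisted step in a consistent, machine-readable form. Below is a minimal sketch in Python; the field names, the `log_ai_use` helper, and the JSON-lines format are illustrative assumptions, not a court-mandated or industry-standard schema.

```python
# Minimal sketch of an AI-use log for a matter file.
# Field names and the JSON-lines format are illustrative
# assumptions, not a prescribed or court-endorsed standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    matter_id: str    # internal matter reference
    tool: str         # approved tool name and version
    task: str         # e.g. "summarise disclosure batch 3"
    prompt_ref: str   # pointer to the stored prompt, not the prompt itself
    output_ref: str   # pointer to the stored output
    reviewer: str     # human who verified the output
    verified: bool    # set True only after line-by-line review
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_ai_use(record: AIUseRecord, path: str = "ai_use_log.jsonl") -> None:
    """Append one record as a JSON line to the matter's AI-use log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_ai_use(AIUseRecord(
    matter_id="2025-0142",
    tool="ApprovedLLM v2 (enterprise)",
    task="first-pass clustering of disclosure batch 3",
    prompt_ref="dms://2025-0142/prompts/0007",
    output_ref="dms://2025-0142/outputs/0007",
    reviewer="A. Associate",
    verified=True,
))
```

Storing pointers to prompts and outputs, rather than the content itself, keeps the log reviewable without re-exposing confidential or privileged material.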

Practical guardrails for litigation and arbitration teams

  • Policy: Define permitted and prohibited use cases. Require pre-approval for any client-confidential or personal data use.
  • Tooling: Prefer enterprise deployments with no training on your data, access controls, logging, and jurisdictional hosting options.
  • Data minimization: Strip identities, use synthetic or masked facts, and keep sensitive documents out of prompts unless contractually protected (see the masking sketch after this list).
  • Human review: Mandate line-by-line cite checks, source-linked drafting, and partner sign-off before filing or sending to the tribunal.
  • Disclosure strategy: Maintain a simple log of AI-assisted steps to manage privilege and potential disclosure disputes.
  • Procedural terms: For arbitration, propose an AI protocol and a cybersecurity protocol at the first case management conference.
  • Vendor diligence: Evaluate model provenance, data-use policy, retention, sub-processors, and incident response commitments.
  • Training: Give your team clear do's and don'ts, with examples tied to your practice areas.
  • Correction protocol: If an AI-induced error slips through, move fast: correct the record, notify the court or tribunal as appropriate, and document remediation.
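
To make the data-minimization guardrail concrete, the sketch below shows a pre-prompt masking pass over obvious identifiers. The regex patterns are deliberately simple assumptions for illustration; real matters need a vetted redaction tool plus human QA, since pattern matching alone misses names and context-dependent identifiers.

```python
# Illustrative pre-prompt masking pass. The patterns below are
# simple assumptions for demonstration; note the personal name
# slips through, which is why regexes alone are not enough.
import re

MASKS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
]

def mask_identifiers(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, placeholder in MASKS:
        text = pattern.sub(placeholder, text)
    return text

print(mask_identifiers(
    "Contact J. Smith at j.smith@example.com or +44 20 7946 0958 "
    "regarding the 14/03/2024 transfer."
))
# -> Contact J. Smith at [EMAIL] or [PHONE] regarding the [DATE] transfer.
```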

Scenarios to test before real matters

  • Drafting skeleton arguments: Use AI for structure or issue lists, but prohibit final text or citations without human sourcing.
  • Translation and transcription: Approve tools in advance and confirm they do not retain content. For evidence, pair outputs with certified human review.
  • Disclosure review: Use AI for clustering and summaries under strict privilege screens. Validate findings with sampling and quality metrics (see the sampling sketch after this list).
  • Witness statements: Keep AI out of first drafts. If used for formatting, record that no substantive drafting was done by AI.
  • Expert reports: Require a statement on any AI assistance, with methodology and limitations explained.
  • Arbitration confidentiality: If using AI note-taking or real-time tools at hearings, obtain tribunal and counterparty consent.
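
For the disclosure-review scenario, validation can be as simple as sampling the model's relevance calls against human decisions and computing precision and recall. The sketch below assumes paired model and human labels are available; the sample size, seed, and stand-in labelers are illustrative assumptions, not a defensibility benchmark.

```python
# Illustrative quality check for AI-assisted disclosure review:
# compare model relevance calls against human review on a random
# sample. Sample size and labeler stand-ins are assumptions for
# demonstration, not a defensibility standard.
import random

def sample_and_score(doc_ids, model_call, human_call, n=50, seed=7):
    """Estimate precision/recall of the model against human labels."""
    sample = random.Random(seed).sample(doc_ids, min(n, len(doc_ids)))
    tp = fp = fn = 0
    for doc in sample:
        model, human = model_call(doc), human_call(doc)
        tp += model and human
        fp += model and not human
        fn += (not model) and human
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# Hypothetical stand-ins for the review platform's lookups:
docs = list(range(200))
model = lambda d: d % 3 == 0            # model flags every third doc
human = lambda d: d % 3 == 0 or d == 7  # human also flags doc 7

p, r = sample_and_score(docs, model, human)
print(f"precision={p:.2f} recall={r:.2f}")
```

Recall matters most here: a model that silently drops responsive documents is a bigger disclosure risk than one that over-includes and is corrected on human review.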

What to do this quarter

  • Inventory every AI touchpoint in active matters and shut down unapproved use.
  • Adopt an interim policy and an approved tool list with data-use terms you can defend.
  • Run a controlled pilot in one matter with logging, review metrics, and post-mortem learning.
  • Align panel counsel and experts on AI protocols to avoid surprises at disclosure or hearings.

For professional conduct and risk, see the SRA guidance on AI. For arbitration security, adopt the ICCA-NYC Bar-CPR Cybersecurity Protocol.

Build team capability

Upskill your team on safe, efficient AI use through structured training and tool overviews with clear do's and don'ts for legal workflows.