AI in Legal Practice: Benefits, Risks, and the Duty to Verify

AI can speed drafting, research and summaries, but like a chainsaw it demands caution. Verify citations, protect confidentiality, and follow court guidance to avoid sanctions.

Published on: Sep 26, 2025

AI in legal practice: weighing benefits and risks

AI is here to stay. The real issue is how lawyers use it. A recent US decision captured it well: like a chainsaw, AI is useful but dangerous if mishandled. Use the tool with caution, and apply actual intelligence to its output.

Where AI helps now

  • Drafting emails and client communications.
  • Website chatbots to triage inquiries.
  • Building chronologies from scattered, unstructured materials.
  • Reviewing and summarising contracts, wills and other documents.
  • Comparing images (including facial images) for identity and matching.
  • Drafting contracts and other legal documents.
  • Indexing bundles and document sets.
  • Summarising facts, hearings and interview transcripts.
  • Finding relevant case law and legislation.
  • Drafting advice letters and submissions.

The risks you must manage

That same "chainsaw" can cause damage if used without checks. Common failure modes include:

  • Hallucinations: fabricated case citations, legislative provisions and quotes.
  • Confidentiality breaches: platforms training on client inputs and exposing data.
  • Copyright risks: AI output that reproduces protected content without licence.

Professional duties already cover this

Existing duties apply. Examples include:

  • Competence, diligence and honesty (r 4, Uniform Law): filing unverified, AI-generated citations falls short.
  • Do not mislead the court (r 19.1): fictitious authorities or AI-drafted affidavits that do not reflect a deponent's knowledge risk misleading.
  • Paramount duty to the court (r 3.1): citing non-existent legislation or authorities does not honour that duty.
  • Act in the client's best interests (r 4.1.1): copyright breaches and unreliable content harm the client.
  • Confidentiality (r 9): feeding client material into systems that train on it without consent may breach confidentiality.

Court rules reinforce this. Section 37M of the Federal Court of Australia Act 1976 requires efficient use of judicial resources; fake citations waste those resources. Under s 37N, courts can order practitioners to personally bear costs wasted by such conduct.

Federal Court of Australia Act 1976 (legislation.gov.au)

Are new rules necessary?

In most cases, no. The existing ethical framework already captures improper AI use. The deeper problem is overconfidence in AI outputs and a lack of fluency with the technology's limits. The answer is verification, education and clear internal policies, not a rush to new legislation.

What courts and tribunals are saying

Some bodies have formalised guidance, others are consulting:

  • Federal Court of Australia: consultation announced on 29 April 2025; guidelines pending.
  • Administrative Review Tribunal: transparency statement confirming AI is not used to make review decisions; members must not use generative AI for reasons and must not input tribunal data; any AI research must be checked and verified.
  • Supreme Court of NSW (SC Gen 23, 28 Jan 2025): do not use generative AI to draft affidavits, witness statements or references; affidavits must disclose that gen AI was not used to generate their content; submissions that used gen AI must include verification that all citations exist, are accurate and relevant, and that evidentiary references are verified.
  • Supreme Court of Victoria (May 2024 guidelines): parties should avoid indirectly misleading others about AI's role; disclose AI use where needed to understand provenance and weight; exercise particular caution for affidavit and witness materials; AI is not used by the court to prepare reasons because it does not engage in context-specific reasoning.

What recent cases teach

Dayal [2024] FedCFamC2F 1166: A solicitor filed a list of authorities generated by an AI tool in practice software without verifying. The court stressed that generative AI outputs are not products of reasoning and do not replace legal research or judgment. The practitioner was referred to the regulator.

Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95: Written submissions included non-existent Federal Court and tribunal decisions. The representative admitted using ChatGPT and explained that the output "read well", so it was filed unchecked. The court found the conduct fell below the standards of competence and diligence, risked misleading the court, and referred the practitioner to the NSW regulator.

JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314: Submissions cited non-existent or misapplied cases. The court noted AI can assist but is not a substitute for research, and unverified use risks conduct that could be construed as contempt (referencing Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin)). The practitioner was referred to the WA regulator and ordered to personally pay costs of $8,371.30.

Luck v Secretary, Services Australia [2025] FCAFC 26: A self-represented appellant relied on a fabricated authority. The court redacted the false citation to avoid polluting future datasets and perpetuating the error, a practice now appearing across Australian decisions.

Practical protocol for safe AI use

  • Verify everything: confirm every citation, quote and legislative reference in primary sources and authorised reports.
  • No gen AI for evidence: do not use it for affidavits, witness statements or references; add the required "no gen AI used" disclosure where mandated.
  • Keep client data safe: do not input confidential material into public tools; use enterprise solutions with data controls, no-training assurances and audit trails.
  • Disclose and verify in submissions: where gen AI assisted, include a verification statement that authorities exist, are accurate and relevant; verify evidentiary references.
  • Supervise and review: junior staff or non-lawyers using AI must be supervised; maintain human review before anything reaches the court or a client.
  • Manage copyright: avoid prompting models to reproduce protected text; cite sources; use licensed databases for research.
  • Log your process: keep prompts, outputs, checks and sources; this helps QA, training and, if needed, explanations to a court (a minimal logging sketch follows this list).
  • Adopt a policy: codify approved tools, prohibited uses (e.g., evidence), verification steps and disclosure requirements.
  • Vendor due diligence: check data handling, training policies, jurisdiction, security certifications and indemnities.
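
To make the logging step concrete, here is a minimal sketch of what a structured AI-use record might look like, assuming a firm keeps a simple per-matter audit log in Python. The field names, tool description and storage path are illustrative only, not a prescribed format or court requirement.

# Minimal sketch of a structured AI-use log entry. Field names are
# illustrative assumptions, not a mandated standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    matter_id: str            # internal matter/file reference
    tool: str                 # approved tool used, per firm policy
    purpose: str              # e.g. "summarise transcript", "draft letter"
    prompt_summary: str       # what was asked (no confidential client data)
    output_location: str      # where the raw output is stored
    citations_checked: bool   # authorities verified against primary sources
    reviewer: str             # person who reviewed before filing or sending
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: the citation check and human review are recorded
# before anything reaches the court or a client.
record = AIUsageRecord(
    matter_id="2025-0147",
    tool="Enterprise LLM (no-training tier)",
    purpose="First draft of chronology from hearing transcript",
    prompt_summary="Build chronology from de-identified transcript",
    output_location="DMS://2025-0147/ai-drafts/chronology-v1.docx",
    citations_checked=True,
    reviewer="Supervising solicitor",
)

print(json.dumps(asdict(record), indent=2))  # append to the matter's audit log

Treating the verification and reviewer fields as mandatory makes it straightforward to show, if ever asked, that citations were checked against primary sources and that a human reviewed the output before it was relied on.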

Training and implementation

If your team is using AI, invest in practical training and policy roll-out. Clear standards and repeatable checklists reduce risk and improve throughput.

See curated AI learning paths by job role

Bottom line

AI is neither good nor bad. It's a tool. Used with human verification, it accelerates quality legal work. Used blindly, it misleads courts, harms clients and invites personal costs orders and regulatory referrals.