AI Hallucinations Are Surging: Guardrails for Family Lawyers

Courts are sanctioning lawyers over fake AI citations, yet family lawyers still use AI for proofreading and prep. Test tools, strip identifiers, verify sources, and disclose use.

Published on: Oct 09, 2025

AI in family law: how to use it safely and effectively

Family law already has its share of AI cautionary tales. In Zhang v. Chen, opposing counsel uncovered fake cases hallucinated by ChatGPT, and the lawyer who filed them was ordered to pay costs personally. Ontario saw a similar scare in Ko v. Li, where citations to non-existent cases prompted a show-cause hearing before the court accepted counsel's apology.

Still, smart practitioners aren't retreating. They're putting AI to work on proofreading, summarizing, litigation prep, and brainstorming witness outlines, while building guardrails that keep them out of trouble.

Why AI is hard to ignore

Lawyers are seeing value across their workflows, from marketing and intake to drafting and financial briefs. Major vendors now bake AI into everyday tools, which means you may be using it without realizing it.

One regulator has already flagged this trend. The Nova Scotia Barristers' Society notes that products like Microsoft 365 and Adobe Acrobat are enabling AI assistants by default. Awareness is now a competency issue.

How leading family lawyers are using AI (carefully)

Some lawyers run draft submissions through an AI assistant to anticipate judicial questions before a chambers appearance. Others use legal-research copilots to map issues quickly, then verify everything on trusted services.

As one counsel put it: "AI may not say what you want it to say, and you don't have to use it. However, there could be pieces of it that are helpful and brilliant." The key is knowing where AI helps and where it can hurt.

Four practical safeguards to protect your practice

1) Do your research

  • Test the tool on prior, closed matters. Compare outputs to known-good work product.
  • Learn the failure modes: hallucinations, stale training data, weak citations, and overconfident tone.
  • Prefer systems that show sources and links. If you can't trace the authority, treat it as unverified.
  • Run a short pilot with written success criteria: speed gains, accuracy thresholds, and review time saved.

2) Keep it confidential

  • Use enterprise versions with a written no-training guarantee, encryption at rest/in transit, and admin controls.
  • Strip client identifiers and sensitive facts from prompts. Use neutral placeholders wherever possible (a minimal redaction sketch follows this list).
  • Turn off data retention where available. Export and store outputs in your DMS, not the vendor's cloud.
  • Get a Data Processing Addendum (DPA). Record the vendor, data flows, retention, and subprocessor list.
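If your team scripts its intake or prompt workflows, a pre-submission redaction pass can enforce the placeholder rule automatically. Below is a minimal Python sketch; the names, address, and SIN pattern are invented placeholders, not a complete de-identification tool:

```python
import re

# Hypothetical matter-specific identifiers, mapped to neutral placeholders.
# In practice, pull these from your matter-management system.
IDENTIFIERS = {
    "Jane Example": "[PARTY A]",
    "John Example": "[PARTY B]",
    "123 Main Street": "[ADDRESS]",
}

# Rough pattern for Canadian SIN-like numbers (three groups of three digits).
SIN_PATTERN = re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b")

def redact(prompt: str) -> str:
    """Replace known identifiers and SIN-like numbers before prompting."""
    for identifier, placeholder in IDENTIFIERS.items():
        prompt = prompt.replace(identifier, placeholder)
    return SIN_PATTERN.sub("[SIN]", prompt)

print(redact("Jane Example (SIN 123 456 789) lives at 123 Main Street."))
# -> [PARTY A] (SIN [SIN]) lives at [ADDRESS].
```

A script like this catches known identifiers only; a human still reviews every prompt before it leaves the firm.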

3) Put humans above machines

  • Apply the same supervision you expect for junior work. Nothing goes out without line-by-line review.
  • Verify every citation. Pull the case in CanLII, Westlaw, or Lexis. Print or save the first page to file.
  • Recheck numbers. If AI drafts income, support, or equalization schedules, redo the math independently (see the recheck sketch after this list).
  • Keep a short "AI review" checklist in each file so the verification step is documented.
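Redoing the math can itself be scripted. The sketch below, with invented figures, recomputes an Ontario-style equalization payment (half the difference between the spouses' net family property) and flags any gap against the AI-drafted number:

```python
# All figures invented. Ontario equalization: the spouse with the higher
# net family property (NFP) pays the other half the difference.
nfp_spouse_a = 480_000.00       # hypothetical NFP, spouse A
nfp_spouse_b = 310_000.00       # hypothetical NFP, spouse B
ai_drafted_payment = 84_500.00  # figure copied from the AI draft

expected = abs(nfp_spouse_a - nfp_spouse_b) / 2  # 85,000.00
if abs(expected - ai_drafted_payment) > 0.01:
    print(f"Mismatch: expected {expected:,.2f}, draft says {ai_drafted_payment:,.2f}")
```

The point is independence: the check uses your own inputs and formula, never the AI's arithmetic.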

4) Be transparent

  • Tell clients in your retainer that you use AI for drafting or analysis, under strict confidentiality and review.
  • Disclose AI use to the court when appropriate (e.g., where it improves clarity or where a practice direction applies).
  • Set an internal rule: log the tool, prompt category, and reviewer (a minimal log sketch follows this list). If you ban AI outright, it will go underground.
  • Bill fairly. If AI reduces drafting time, reflect that in fees or use alternative fee arrangements.
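The log can be as simple as a shared spreadsheet or a CSV appended from a script. A minimal Python sketch, with a hypothetical file path and entries:

```python
import csv
from datetime import date

LOG_PATH = "ai_use_log.csv"  # hypothetical path; store it in your DMS

def log_ai_use(tool: str, prompt_category: str, reviewer: str) -> None:
    """Append one row per AI-assisted task: date, tool, category, reviewer."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), tool, prompt_category, reviewer]
        )

log_ai_use("research-copilot", "document summary", "A. Lawyer")
```

What matters is the habit, not the tooling: every AI-assisted task gets a dated entry and a named reviewer.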

A simple AI-use protocol for family firms

  • Scope: Where AI is allowed (summaries, outlines, style edits) and where it is not (original legal analysis without verification, affidavits without review).
  • Privacy: No client names, addresses, or health/child information in prompts. Use anonymized facts only.
  • Tools: Approved list with settings (no training enabled, retention off, region set).
  • Sources: Cite-check policy; authorities must be verified in a recognized database before use.
  • Review: Named reviewer for each AI-assisted deliverable. Checklist filed to DMS.
  • Disclosure: Model clause in retainers and, where prudent, in filings.
  • Training: Quarterly refresh on hallucinations, prompt hygiene, and confidentiality.
  • Incident response: What to do if a hallucination or disclosure error is found (client notice, corrective filing, internal review).
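To make a protocol like this operational rather than aspirational, firms with any scripting capacity can gate prompts before they are sent. A minimal sketch, where the approved tools and banned identifiers are placeholders for your own lists:

```python
# Placeholder policy lists; substitute your firm's approved tools and
# the identifiers for the matter at hand.
APPROVED_TOOLS = {"enterprise-copilot", "research-copilot"}
BANNED_TERMS = ["jane example", "123 main street"]

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return policy violations; an empty list means OK to send."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"tool not on approved list: {tool}")
    for term in BANNED_TERMS:
        if term in prompt.lower():
            violations.append(f"prompt contains identifier: {term}")
    return violations

print(check_prompt("chatgpt-free", "Summarize Jane Example's affidavit."))
# -> ['tool not on approved list: chatgpt-free',
#     'prompt contains identifier: jane example']
```

Even a gate this crude enforces the two rules most often broken: unapproved tools and client names in prompts.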

Where AI helps today (and what to watch for)

  • Proofreading and plain-language edits: Helpful; watch for subtle meaning shifts.
  • Document summaries: Useful for voir dire prep; confirm key holdings and quotes in the source.
  • Issue spotting and witness outlines: Good for brainstorming; finalize strategy yourself.
  • Financial schedule drafts: Speedy; independently validate inputs, formulas, and outputs.

Hallucinations are rising: plan accordingly

Reported legal decisions involving hallucinated AI content are increasing. Treat this as an operational risk, not a rare edge case.

Growth by period

  • July-December 2023: 8
  • January-June 2024: 12
  • July-December 2024: 38
  • January-June 2025: 151
  • July-September 2025: 168

Reported cases by jurisdiction (since July 2023)

  • USA: 251
  • Australia: 28
  • Israel: 28
  • Canada: 27
  • United Kingdom: 15

Source: Damien Charlotin's database of reported legal decisions involving hallucinated content (as of Sep. 23, 2025).

Practical prompts that reduce risk

  • "List the top 5 issues a judge might raise based on these submissions. Do not cite law."
  • "Summarize this decision in 5 bullet points for a client update. No legal conclusions."
  • "Draft a neutral outline for cross-examination based on these facts. Flag assumptions."
  • "Rewrite this section for clarity at a grade-10 reading level. Preserve legal meaning."

Bottom line

AI will not run your practice for you. Used well, it will save time, reduce stress, and sharpen preparation, provided you verify, protect confidentiality, and stay transparent.

Start with narrow, low-risk use cases, write down your rules, and keep humans in charge. That's how you get the upside without paying for the mistakes.

Want structured upskilling?

If your firm is formalizing policies and workflows, consider role-specific training. See curated options at Complete AI Training.

