Lawyers referred to regulators after AI errors in court filing
Three Australian lawyers - a South Australian solicitor and two Victorian barristers, including a King's Counsel - have been referred to their state regulators after a court filing prepared with AI included "erroneous citations". The Federal Circuit and Family Court of Australia found the document contained inaccurate and misleading references to case law, which it described as AI "hallucinations".
What happened
The court said the appeal document "appeared to have been drafted with the assistance of artificial intelligence" and that "the use of AI at least resulted in erroneous citations". The lawyers later confirmed AI was used and apologised, accepting responsibility and assuring the court it would not happen again.
The South Australian solicitor told the court she did not use AI herself and that her paralegal had used it. She accepted full responsibility and said she had ended the paralegal's engagement. An amended filing removed non-existent and misleading authorities from the footnotes.
The court's concerns
The judgment highlighted risks to confidentiality and privilege, warning that entering draft documents into an AI program could breach rules governing subpoenaed material or waive legal professional privilege. It also underscored the risk of false authorities being introduced into submissions.
Costs and immediate fallout
The court ordered the South Australian solicitor to pay the other party's costs, on top of more than $35,000 in court costs. By consent, she was also ordered to pay a further $10,000 in costs thrown away in correcting the AI-generated errors.
The three practitioners have been referred to their respective regulators and may face further consequences. In Victoria, that regulator is the Victorian Legal Services Board and Commissioner.
Why this matters for practitioners
This is not a debate about whether AI can help. It's about professional obligations: accuracy, candour to the court, confidentiality, and proper supervision. If AI touches a filing, you own the output - including its mistakes.
Action checklist for legal teams
- Prohibit uploading confidential, privileged, or subpoena material into public AI tools. Use vetted, enterprise-grade options with data controls if AI is permitted.
- Mandate human verification of all citations, quotes, and references against primary sources (official reports, AustLII, Jade, subscription databases).
- Require practitioner sign-off on any submission that involved AI at any stage of drafting.
- Keep an internal log of where AI was used, by whom, and how it was verified (a minimal example register follows this list).
- Set clear supervision rules: no unsupervised AI use by juniors, paralegals, or contractors. Document training given to support staff.
- Establish red lines: AI may assist with structure or style, but not with legal research or citations unless independently verified.
- Follow any court-specific directions on the disclosure or prohibition of AI use.
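For teams formalising the internal log mentioned above, here is a minimal sketch of an append-only AI-use register kept as a CSV file. It is an illustration, not a standard: the field names, file path, and example matter details are all assumptions to adapt to your own matter-management workflow.

```python
# Minimal sketch of an AI-use register (illustrative only; field names,
# file path, and example details are assumptions, not a standard).
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_use_register.csv")  # hypothetical location
FIELDS = [
    "timestamp_utc",   # when the AI assistance occurred
    "matter",          # matter number or name
    "document",        # the filing or draft affected
    "tool",            # which AI tool was used
    "used_by",         # staff member who used the tool
    "purpose",         # e.g. "structure/style" vs "research"
    "verified_by",     # practitioner who checked the output
    "verification",    # how citations and quotes were checked
]

def log_ai_use(**entry: str) -> None:
    """Append one AI-use record, creating the file with headers if needed."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(
            {"timestamp_utc": datetime.now(timezone.utc).isoformat(), **entry}
        )

# Example usage (hypothetical matter details):
log_ai_use(
    matter="SMITH-2025-014",
    document="Appeal submissions (draft v2)",
    tool="Enterprise LLM (approved list)",
    used_by="J. Paralegal",
    purpose="Structure and style only",
    verified_by="A. Practitioner",
    verification="All citations checked against AustLII and Jade",
)
```

A spreadsheet or a field in your practice-management system works just as well; the point is that every AI-assisted document has a recorded user, purpose, and verifier before it is filed.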
Firm policy essentials
- Approved tools list, data handling standards, and do-not-upload rules.
- Verification protocol for authorities and facts before filing.
- Supervision framework that names responsible practitioners.
- Incident response: how to correct the record, notify the court if needed, and contain further risk.
- Training cadence for all staff, including contractors.
Bottom line
AI can speed up drafting, but it can also inject errors that put your practice at risk. Treat it like a junior assistant with no judgment: useful with guardrails, dangerous without them.
Resources
If you're formalising AI use in your practice, structured training helps. Explore role-based options here: Complete AI Training - Courses by Job.