Australian courts hit an AI tipping point: judges are acting as "human filters"
Australia's most senior judge has issued a blunt warning: the courts have entered an "unsustainable phase" of AI use. Chief Justice Stephen Gageler says judges and magistrates are increasingly forced to act as "human filters" for machine-generated arguments that arrive dressed as legal submissions.
He sees the upside too. Used well, AI could help deliver civil justice that "aspires to be just, quick and cheap." But the pace of development is outstripping the profession's ability to assess its risks and rewards, and the gap is showing up in court.
What's happening in courtrooms
Self-represented parties and trained practitioners alike are filing machine-enhanced arguments, AI-assisted evidence, and AI-drafted submissions without adequate verification. The result: judges are refereeing contests between competing machine outputs rather than testing properly grounded law and fact.
This isn't theoretical. False precedents have been cited in multiple jurisdictions. In September, a Victorian lawyer became the first in Australia sanctioned for AI-generated false citations after failing to verify authorities.
The risk picture legal teams should plan for now
- Hallucinated citations and misstatements of principle presented with unwarranted confidence.
- Undisclosed AI assistance masking authorship and accountability.
- Privacy, privilege, and confidentiality exposure from uploading client material to third-party tools.
- Hidden bias and unequal access to tools, affecting procedural fairness.
- Evidence contamination where AI "summaries" reshape witness language or meaning.
Where AI can help safely
- First-pass document review, chronology building, and issue spotting, paired with human verification.
- Drafting templates for directions, consent orders, and timetables, finalised by a lawyer.
- Discovery prioritisation and search-term testing, supported by audit logs and sampling.
- Plain-language explanations of process for self-represented litigants, approved by the court.
Most jurisdictions have issued guidance, and a specialist review by the Victorian Law Reform Commission is underway. Broader law reform work, including on justice responses to sexual violence, is ongoing at the Australian Law Reform Commission.
Immediate actions for practitioners
- Disclose meaningful AI assistance in documents filed with the court where required by practice notes.
- Verify every citation against primary sources; add pinpoint references and attach authoritative copies where appropriate.
- Keep an audit trail: tool name, model/version, date/time, inputs provided, sources relied on, and human reviewer (a minimal record sketch follows this list).
- Disable training/data retention in tools, or use enterprise instances with contractual safeguards.
- Never upload confidential material to public models; sanitise and use local or approved environments.
- Ban absolute claims from AI outputs. Require evidence, reasoning, and citations that a human checks line-by-line.
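For teams standardising that audit trail, the sketch below shows one way to capture the fields as a structured record. It is illustrative only: the field names, the JSON-lines log file, and the example values are assumptions rather than a prescribed format, and should be adapted to your firm's own matter-management conventions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """One audit-trail entry per AI-assisted task (illustrative field names)."""
    matter_ref: str          # internal matter or file reference
    tool_name: str           # product used
    model_version: str       # model/version string reported by the tool
    timestamp: str           # ISO 8601 date/time of use
    inputs_provided: str     # description of prompts or material supplied
    sources_relied_on: list  # authorities or documents the output drew on
    human_reviewer: str      # person who verified the output

def log_record(record: AIUsageRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line to a local audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example entry
log_record(AIUsageRecord(
    matter_ref="2025-0417",
    tool_name="approved-drafting-tool",
    model_version="v1.0",
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_provided="Draft chronology from filed affidavits (sanitised)",
    sources_relied_on=["Affidavit of J. Smith, 12 May 2025"],
    human_reviewer="Senior associate, litigation team",
))
```

An append-only log along these lines also makes the random spot-checks suggested below for courts and registries straightforward to satisfy.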
For courts and registries: regain control with simple guardrails
- Introduce an AI-use statement with filings: what tool, for what step, and who verified it.
- Add a certification checkbox at e-filing that counsel has verified all authorities and quotations.
- Publish a sanctions matrix for false citations and undisclosed AI use, scaled by culpability.
- Provide standard prompts/templates for safe uses (chronologies, neutral summaries) and ban risky ones (evidence "enhancement").
- Require logs for any AI-assisted disclosure or review and enable random spot-checks.
- Offer basic training for chambers and registry staff on recognising AI-generated artefacts.
Considering AI in decision-making? Non-negotiables
- Human decision-maker remains responsible; AI outputs are advisory only.
- Transparency: publish model type, data governance, and limitations in plain language.
- Reason-giving: parties can obtain the reasons relied on by the human decision-maker, not a black-box summary.
- Contestability: clear pathways to challenge AI-assisted steps; preserve the record.
- Impact assessments and bias testing before pilots; independent audits thereafter.
- Strict separation between tools used for administrative support and any tool touching adjudication.
Judicial wellbeing isn't a side issue; it's a system risk
Gageler urged judges and magistrates to speak openly about stress, mental health, and the vicarious trauma of their caseloads, especially cases involving family or sexual violence. Threats of physical harm are real. A safe system of work is not optional; it is foundational to judicial independence and performance.
He also warned the system is failing many victims of sexual violence. Trauma-informed procedures, specialist lists, better support for complainants, and faster pathways to resolution are essential if the courts are to deliver justice and maintain trust.
A 30-day plan for legal leaders
- Publish a firm-wide AI policy aligned to court practice notes; set approval thresholds and prohibited uses.
- Stand up an "AI desk" to review prompts/outputs for live matters and maintain audit logs.
- Run a verification drill: sample 20 citations from recent filings; fix gaps and retrain teams (see the sketch after this list).
- Engage clients: update engagement letters to cover AI use, confidentiality, and liability.
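To get the verification drill started, a short script like the one below can draw the random sample for manual checking. It assumes the citations from recent filings have been exported to a CSV file with a "citation" column; the file name and column name are illustrative assumptions, and the actual verification against primary sources remains a human task.

```python
import csv
import random

def sample_citations(csv_path, n=20, seed=None):
    """Draw a random sample of citations for manual verification against primary sources."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        citations = [row["citation"] for row in csv.DictReader(f) if row.get("citation")]
    rng = random.Random(seed)  # set a seed only if the drill needs to be repeatable
    return rng.sample(citations, min(n, len(citations)))

if __name__ == "__main__":
    # Hypothetical export of citations from recent filings
    for c in sample_citations("recent_filings_citations.csv"):
        print(c)  # each sampled citation is then checked by a human reviewer
```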
Bottom line
Courts are signalling a hard boundary: AI can help, but it can't replace judgment, verification, and accountability. Tighten your practices now and you'll reduce risk, cut waste, and help the system move from "human filters" to human-led quality control, where it belongs.
If your team needs structured upskilling on practical, low-risk AI use for legal work, see our curated tracks by role at Complete AI Training.