VLRC calls for a ban on AI in judicial decision-making - with clear principles for safe court use
The Victorian Law Reform Commission (VLRC) has recommended a firm boundary for AI in the justice system: use it to support court operations, but keep it out of judicial decision-making. The Commission's report sets out principles, guidelines, and oversight to keep trust, independence, and accuracy intact across Victoria's courts and tribunals.
AI can help with speed and access. It can also introduce risk. As VLRC chair Anthony North cautioned, "There are issues with the security and privacy of information used in AI tools. We have also seen a growing number of cases where inaccurate or made-up AI-generated content has been submitted to courts."
The headline: no AI in judicial decision-making
VLRC recommends that judicial guidelines explicitly prohibit the use of AI tools in making judicial decisions. "To support public trust, we recommend that judicial guidelines prohibit the use of AI tools for judicial decision-making," North said.
The concern is straightforward: algorithmic influence on outcomes undermines judicial independence and confidence in the administration of justice.
What the Commission recommends
The report advances a principles-based approach, backed by education and governance. Key points include:
- Eight core principles to guide safe AI use across courts and tribunals.
- Practical guidelines for court users, judicial officers, and staff to put those principles into action.
- An AI assurance framework to assess, approve, and monitor new AI uses, with clear accountability and review gates.
- Training and education for lawyers, judicial officers, and the public on safe use, limits, and disclosure.
- A ban on AI for judicial decision-making to protect independence and public confidence.
For context and updates, see the Victorian Law Reform Commission. Given the focus on security and privacy, practitioners may also wish to revisit the Australian Privacy Principles.
Where AI is being tested now
AI use in Victoria's courts and at VCAT is still at an early stage. Ongoing pilots are exploring transcription, legal research, document review, and tools that blur or remove distressing images to reduce vicarious trauma for staff.
These are support functions, not decision-making. They're precisely where an assurance framework can add guardrails without compromising fairness.
What this means for legal practitioners
Expect clearer rules on how you can and can't use AI in filings and court interactions. Disclosure, verification, and confidentiality obligations will tighten, especially if you use AI for drafting or evidence handling.
Submissions built with AI must be checked against source law and facts. Citations need verification. Sensitive material should not be fed into public or vendor systems without a lawful basis, appropriate safeguards, and (where needed) client consent.
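Citation checking doesn't need heavy tooling. As a minimal sketch, assuming a plain-text draft and Australian medium-neutral citation format, a short script can surface every citation-like string for human checking. The pattern and helper name here are illustrative assumptions, not a standard:

```python
import re

# Flag citation-like strings in an AI-assisted draft so a human can verify
# each one against the primary source. The regex targets Australian
# medium-neutral citations (e.g. "[2024] VSCA 15") and is illustrative
# only; it will not catch every citation format.
CITATION_PATTERN = re.compile(r"\[(?:19|20)\d{2}\]\s+[A-Z]{2,6}\s+\d+")

def flag_citations_for_review(draft_text: str) -> list[str]:
    """Return the unique citation-like strings found in the draft."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

if __name__ == "__main__":
    draft = "As held in Smith v Jones [2024] VSCA 15 and affirmed in [2023] HCA 2 ..."
    for citation in flag_citations_for_review(draft):
        print(f"VERIFY against the authorised report: {citation}")
```

Anything a script like this misses still has to be caught by the human reviewer; the point is to make verification systematic, not to automate it away.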
Practical next steps for firms and counsel
- Adopt an internal AI policy that covers approved tools, confidentiality, verification, record-keeping, and disclosure.
- Mandate human verification of all AI-assisted research, drafting, and citations. No exceptions.
- Protect client data: use enterprise tools with contractually defined data handling; disable training on client inputs.
- Create an assurance checklist for any AI use: purpose, risks, controls, auditability, and exit/rollback (a structured version is sketched after this list).
- Train your team on safe use, common failure modes (hallucinations, bias, leakage), and court expectations.
- Prepare for disclosure where required: who used AI, for what step, with what human oversight.
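For teams that want these steps to leave an audit trail, the checklist can be captured as a structured record. Here is a minimal Python sketch; every field name is an assumption chosen to mirror the bullets above, not a prescribed schema, and it would need adapting to your firm's actual policy and matter-management system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for an internal AI-use register. Fields mirror the
# checklist above (purpose, risks, controls, auditability, exit/rollback)
# plus the disclosure details a court may expect: who used AI, for what
# step, with what human oversight.
@dataclass
class AIUseRecord:
    matter_id: str            # internal matter reference
    tool: str                 # approved tool used
    purpose: str              # what step the tool supported
    operator: str             # who used the tool
    reviewer: str             # who performed human verification
    risks: list[str] = field(default_factory=list)     # identified risks
    controls: list[str] = field(default_factory=list)  # mitigations applied
    verified: bool = False    # human verification completed?
    rollback_plan: str = ""   # how to exit the tool if it fails
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example entry: AI-assisted first-draft research, verified before filing.
record = AIUseRecord(
    matter_id="M-0421",
    tool="Enterprise research assistant",
    purpose="First-pass case law research",
    operator="junior.solicitor",
    reviewer="supervising.partner",
    risks=["hallucinated citations", "confidentiality leakage"],
    controls=["citations checked against authorised reports",
              "no client identifiers in prompts"],
    verified=True,
    rollback_plan="Revert to manual research workflow",
)
print(record)
```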
Governance in plain terms
- Before deployment: risk assessment, legal review (privacy, privilege, confidentiality), and sign-off by an accountable owner.
- During use: logs, versioning, and sampling to catch errors or drift (a simple sampling rule is sketched after this list).
- After incidents: corrective action, client/court notifications where appropriate, and policy updates.
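"Sampling" can be as simple as deterministically flagging a fixed share of logged AI outputs for human review. The sketch below assumes a log keyed by entry IDs and a 10% review rate; both are illustrative choices, not a standard:

```python
import hashlib

# Hypothetical sampling rule for the "during use" stage: deterministically
# select a fixed fraction of logged AI outputs for human review, so the
# sample is reproducible across audits. The 10% rate is an assumption.
SAMPLE_RATE = 0.10

def selected_for_review(log_entry_id: str, rate: float = SAMPLE_RATE) -> bool:
    """Hash the entry ID and review it if the hash falls in the sample band."""
    digest = hashlib.sha256(log_entry_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate

# Example: decide which logged outputs a reviewer should check this week.
for entry_id in ("out-1001", "out-1002", "out-1003", "out-1004"):
    if selected_for_review(entry_id):
        print(f"{entry_id}: queue for human review")
```

Hashing the ID, rather than drawing random numbers, means the same entries are always selected, so an auditor can reproduce exactly which outputs were reviewed.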
Watch list for future reform
VLRC flagged areas that may need further work as AI use grows: administrative law implications, evidentiary rules, transparency standards, and ongoing monitoring of new risks. Courts will likely iterate on guidance as technology and use cases mature.
Upskilling resources
Given the emphasis on education, structured training can help teams meet new expectations. Explore role-based options here: AI courses by job.