Self-represented Australians are using AI in court, and it's backfiring

Since late 2022, at least 84 Australian court cases have involved generative AI, most of them brought by self-represented litigants. Courts report fake citations, delays, costs orders, and privacy risks. Verify your sources before filing.

Published on: Sep 29, 2025

Judges get why self-represented parties turn to free tools. As one County Court judge put it this year, generative AI can be "beguiling" when the task feels overwhelming. The catch: if you don't verify what it produces, you risk damaging your case.

What the data shows

Since late 2022, at least 84 Australian cases have involved generative AI. In more than three-quarters of them (66 of 84), the litigant using AI was self-represented. Matters spanned property and wills, employment, bankruptcy, defamation, and migration.

Self-representation is already common. In 2023-2024, 79% of litigants in migration matters at the Federal Circuit Court were unrepresented. Courts are responding with plain-language forms and guidance, but the rise of free AI tools is adding a new failure point.

What courts are saying now

Queensland courts recently warned that inaccurate AI-generated content can delay matters and even attract costs orders. In New South Wales, the Chief Justice noted a party was candid about using AI, but found many AI-generated submissions "misconceived, unhelpful and irrelevant." Good faith won't fix bad inputs.

The risks if AI gets it wrong

  • Fabricated authorities: Fake cases or misquoted holdings lead to rejected submissions.
  • Outcome risk: Invalid evidence or argument can mean losing an otherwise valid claim.
  • Adverse costs: Courts may order a self-represented party to pay the other side's costs.
  • No safety net: If a lawyer relies on fake law, the client may have a negligence claim against them. If you are self-represented, you bear the loss yourself.
  • Confidentiality breaches: Putting private or suppressed information into a chatbot can expose it publicly and breach orders.

Lower the risk: do this instead

  • Use primary sources first: Check legislation and cases directly on reliable databases such as AustLII and JADE.
  • Follow court guidance: Many courts (e.g., Supreme Courts of QLD, NSW, VIC) publish rules on acceptable AI use. Read and apply them.
  • Verify every citation: Search the case yourself. Confirm it exists, the court and year are correct, and the passage supports your proposition; a scripted first-pass check is sketched after this list.
  • Protect confidentiality: Do not paste names, facts under suppression, or privileged material into public AI tools.
  • Use libraries: Court and university law libraries offer free guides and textbooks that summarise principles accurately.
  • Get human help where possible: Duty lawyers and community legal centres can sanity-check strategy or filings.
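
To make the citation check concrete: Australian medium neutral citations have a predictable shape, so a small script can flag citations that don't resolve anywhere before you rely on them. Below is a minimal Python sketch. The AustLII URL pattern and the court-code mapping are assumptions for illustration (confirm them on austlii.edu.au), and a page that exists still has to be read; the script only catches citations that point nowhere, not quotes that misstate a real case.

```python
import re
import urllib.error
import urllib.request

# Court code -> AustLII jurisdiction path. Both this mapping and the URL
# pattern below are assumptions for illustration; confirm the real paths
# on austlii.edu.au before relying on them.
JURISDICTIONS = {"HCA": "cth", "FCA": "cth", "NSWSC": "nsw", "QSC": "qld", "VSC": "vic"}

def austlii_url(citation: str) -> str | None:
    """Turn a medium neutral citation like '[2020] HCA 5' into a candidate URL."""
    m = re.fullmatch(r"\[(\d{4})\]\s+([A-Z]+)\s+(\d+)", citation.strip())
    if not m:
        return None
    year, court, number = m.groups()
    jurisdiction = JURISDICTIONS.get(court)
    if jurisdiction is None:
        return None
    return f"http://www.austlii.edu.au/au/cases/{jurisdiction}/{court}/{year}/{number}.html"

def citation_resolves(citation: str) -> bool:
    """True if the candidate AustLII page exists. A miss is a red flag that
    the citation may be fabricated; a hit still means reading the case."""
    url = austlii_url(citation)
    if url is None:
        return False
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

print(citation_resolves("[1992] HCA 23"))  # Mabo v Queensland (No 2): should exist
```

A negative result is a prompt for manual checking, not proof of fabrication, since AustLII's coverage and paths vary; JADE or the court's own website is the next stop.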

If you still plan to use AI

  • Limit scope: Use AI for structure, checklists, or drafting templates, not for legal research or citations.
  • Cross-check everything: Treat AI outputs as unverified notes. Validate against primary materials before filing.
  • Keep an audit trail: Record prompts, versions, and your verification steps in case the court asks; a simple logging sketch follows this list.
  • Avoid sensitive data: Redact and generalise facts when testing wording or structure.
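
For the audit trail and the redaction step, a simple append-only log is enough. Here is a minimal sketch, assuming a JSON-lines file; the filename, fields, and redaction patterns are illustrative, and no regex catches every identifier, so review anything you paste into a public tool by eye.

```python
import json
import re
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical filename; use whatever suits your matter file

def redact(text: str) -> str:
    """Crude redaction before text goes near a public tool: masks capitalised
    name pairs and long digit runs. Illustrative only; review the result by eye."""
    text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", text)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    return text

def log_ai_use(prompt: str, tool: str, verified: bool, notes: str = "") -> None:
    """Append one record per AI interaction, so you can show the court
    exactly what you asked, when, and how you checked the output."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": redact(prompt),
        "output_verified_against_primary_sources": verified,
        "notes": notes,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use(
    prompt="Draft a chronology template for an employment dispute",
    tool="general-purpose chatbot",
    verified=True,
    notes="Structure only; no citations requested or used.",
)
```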

For legal teams and chambers

  • Adopt an AI-use policy: Define approved tools, prohibited tasks (e.g., research), and verification standards.
  • Create a citation checklist: Mandatory case verification, pinpoint references, and quote accuracy checks; one way to encode this is sketched after this list.
  • Train your people: Baseline AI literacy plus legal research refreshers reduce errors and rework.
  • Monitor and review: Periodic file audits to catch hallucinations, confidentiality risks, and drift from policy.
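
One way to make the citation checklist enforceable rather than aspirational is to encode it as a gate in the filing workflow. A minimal sketch, assuming a four-point check per authority; the field names are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    """One row of a pre-filing citation checklist. Field names are
    illustrative, not a standard; adapt them to your own policy."""
    citation: str
    found_on_primary_database: bool  # AustLII, JADE, or an authorised report
    court_and_year_match: bool
    pinpoint_reference_confirmed: bool
    quote_matches_judgment: bool

    def passes(self) -> bool:
        return all([
            self.found_on_primary_database,
            self.court_and_year_match,
            self.pinpoint_reference_confirmed,
            self.quote_matches_judgment,
        ])

def ready_to_file(checks: list[CitationCheck]) -> bool:
    """Block filing until every cited authority clears every check."""
    failures = [c.citation for c in checks if not c.passes()]
    for citation in failures:
        print(f"HOLD: {citation} has unverified elements")
    return not failures
```

The point of the structure is that a filing cannot proceed with an unknown left in it: every authority is either verified on every point or explicitly flagged for follow-up.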

Access to justice is the problem; AI isn't the shortcut. Until tools can guarantee source-grounded accuracy and privacy, treat AI as a drafting assistant at best, and never as your authority.

If your practice is building safe, baseline AI literacy for staff, see curated options by job role: Complete AI Training.