Deloitte to Refund Part of AU$440,000 After AI-Generated Errors in Welfare IT Report
Deloitte Australia will repay a portion of the AU$440,000 (US$290,000) fee for a government-commissioned report that included fabricated citations and a false quote attributed to a federal court judge. The Department of Employment and Workplace Relations said a revised version has been published, and Deloitte agreed to repay the final installment under its contract. The refund amount will be disclosed once processed.
The original 237-page report assessed the department's IT systems and their use of automated penalties in the welfare system. After concerns were raised publicly by Chris Rudge, a University of Sydney researcher in health and welfare law, Deloitte reviewed the document and confirmed incorrect footnotes and references. The department said the report's "substance" and recommendations remain unchanged.
What Happened
- The initial report included a fabricated judicial quote and citations to research that does not exist.
- The revised report discloses that a generative AI system, Azure OpenAI, was used in drafting; the false judicial quote and invalid references have been removed.
- Deloitte stated the matter has been resolved with the client. The firm did not answer whether AI produced the errors.
- Rudge said he identified up to 20 errors, including a nonexistent book attributed to Professor Lisa Burton Crawford, and described the misquoted judgment as a serious legal compliance issue.
- Greens Senator Barbara Pocock called for a full refund of the AU$440,000, citing improper AI use and academic misrepresentation.
Why This Matters for Government Teams
AI-assisted drafting can speed up analysis but introduces a real risk of "hallucination": confident claims that are false. In legal and compliance-heavy work, that risk becomes a governance problem, since misstating case law can distort policy advice and expose agencies to error.
Disclosure alone is not enough. Procurement, review processes, and audit trails must catch fabricated references, misquotes, and source inflation before advice reaches decision-makers.
Action Checklist Before Accepting Vendor Reports
- Require written disclosure of any AI tools used, where in the workflow they were applied, and by whom.
- Mandate a source pack: full-text copies or links to every cited source; no unreachable or vague references.
- Verify all legal citations against official judgments (e.g., AustLII or equivalent) and confirm quotes word-for-word.
- Spot-check 10-20% of citations for accuracy; expand the sample if even one error is found (see the sketch after this list).
- Insist on named human reviewers and sign-off from a qualified subject-matter expert (legal, policy, technical).
- Document the review trail: who checked what, when, and what was corrected.
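For the spot-check step, a lightweight script can help triage which citations warrant manual review. The sketch below is a minimal illustration in Python, not a prescribed tool: the citation records, field names, and 15% sample rate are assumptions for the example, and a failed fetch only flags an entry for human verification rather than proving fabrication.

```python
import random
import urllib.error
import urllib.request

# Illustrative citation records; in practice these would come from the
# vendor's source pack (identifier, title, cited URL).
citations = [
    {"id": "C1", "title": "Example journal article", "url": "https://example.org/article"},
    {"id": "C2", "title": "Example judgment", "url": "https://example.org/judgment"},
]

SAMPLE_RATE = 0.15  # spot-check 10-20% of citations; 15% is an arbitrary midpoint


def url_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL responds. A failure flags the citation
    for manual review; it does not by itself prove the source is fabricated."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False


def spot_check(records, rate=SAMPLE_RATE):
    """Randomly sample a share of citations and flag unreachable sources."""
    sample_size = max(1, round(len(records) * rate))
    sample = random.sample(records, sample_size)
    flagged = [c for c in sample if not url_reachable(c["url"])]
    return sample, flagged


if __name__ == "__main__":
    sample, flagged = spot_check(citations)
    print(f"Checked {len(sample)} of {len(citations)} citations")
    for c in flagged:
        print(f"FLAG for manual review: {c['id']} - {c['title']}")
    if flagged:
        # Per the checklist: one failure means the sample should be expanded
        # and quotes verified word-for-word against the original sources.
        print("At least one citation failed; expand the sample.")
```

A script like this only narrows the list; the named human reviewers still confirm that each flagged source exists and that every quote matches the original text.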
Procurement Clauses to Consider
- AI use and disclosure: Vendors must disclose models and services used (e.g., Azure OpenAI), prompts, and safeguards.
- Accuracy and verification: Zero tolerance for fabricated citations; misquotes are material defects requiring correction at vendor cost.
- Audit rights: Agency may audit sources, drafts, and version history, including AI interaction logs where feasible.
- Holdbacks and remedies: Tie final payment to a passed quality audit; include refund and rework provisions for detected fabrication.
- Data handling: Prohibit input of sensitive or identifiable data into public models without written approval and risk assessment.
Operational Guardrails for Internal Teams
- Establish an AI-assisted drafting policy covering disclosure, verification, and record-keeping.
- Create a standing "fact-check cell" for legal and citation review on high-stakes reports.
- Train policy and legal teams on AI failure modes, especially citation hallucinations and misattribution.
- Adopt Australia's AI Ethics Principles as a baseline for vendor and internal AI use.
Context and Next Steps
The department says the report's recommendations remain unchanged after the corrections, but the incident highlights the need for stronger verification of any AI-assisted deliverable. Expect more scrutiny of vendor methods, source transparency, and legal accuracy in audit-style work.
If your team relies on external analysis, set a clear bar: disclosure, verifiable citations, and accountable human review. For capability building across roles, see practical AI training by job function: Complete AI Training - Courses by Job.