Deloitte to Partially Refund AU$440,000 Government Report Fee After AI-Generated Errors
Deloitte Australia will repay part of the AU$440,000 (US$290,000) fee for a government-commissioned report after errors were found, including a fabricated quote from a federal court judgment and references to research that does not exist.
The 237-page report for the Department of Employment and Workplace Relations (DEWR) was first published in July and revised on Friday after errors were flagged by an academic. Deloitte's review "confirmed some footnotes and references were incorrect," and the firm agreed to repay the final instalment under its contract. The department said the amount will be disclosed after reimbursement.
What happened
The report examined departmental IT systems and their use of automated penalties in Australia's welfare system. DEWR said the substance and recommendations remain unchanged, but the revised version discloses that a generative AI tool, Azure OpenAI, was used in drafting.
Quotes attributed to a federal court judge were removed, along with references to academic and technical reports that could not be verified. Deloitte said the issue was resolved with the client and did not comment on whether AI produced the errors.
Who surfaced the issues
Chris Rudge, a Sydney University researcher in health and welfare law, said he found up to 20 errors. One entry wrongly attributed a nonexistent book to Professor Lisa Burton Crawford, with a title that fell outside her area of expertise.
Rudge said work by his academic colleagues appeared to be cited as "tokens of legitimacy," without evidence it was read. "They've totally misquoted a court case then made up a quotation from a judge… that's about misstating the law to the Australian government in a report that they rely on," he said.
Senator Barbara Pocock, the Greens' spokesperson on the public sector, said Deloitte should refund the entire AU$440,000. "Misquoted a judge, used references that are non-existent - the kinds of things that a first-year university student would be in deep trouble for," she said.
Why this matters for government teams
Generative AI can speed up drafting, but it can also fabricate citations and quotes. When reports inform legal compliance and program oversight, unverified AI content becomes a direct risk to policy, funding, and public trust.
This case makes one point clear: disclosure and verification are not optional. They are baseline requirements for any vendor deliverable that uses AI.
What to do now
- Mandate AI-use disclosure: Require vendors to declare if, where, and how AI tools are used (models, versions, prompts, plugins). Include this in statements of work.
- Enforce citation checks: For any legal, policy, or technical claims, require source URLs or bibliographic entries and human verification (a minimal automation sketch follows this list). For case law, demand pinpoint citations and confirmation by counsel.
- Tighten contracts: Add accuracy warranties, prohibition of fabricated citations, audit rights for drafts and source notes, and refund/withhold provisions tied to verification failures.
- Add a QA gate: Implement a citation and quote checklist. Require a named sign-off from the vendor's lead and your internal reviewer before acceptance.
- Secure the evidence: For sensitive deliverables, require an appendix with copies of key sources and a change log. Keep version history and decision records.
- Don't rely on AI "detection" tools: They are unreliable. Focus on outcome-based verification: facts, citations, and reproducible analysis.
- Set departmental AI policy: Define allowed use cases, required disclosures, and escalation paths for legal or safety-critical content.
- Train your staff: Build capability in prompt auditing, citation verification, and AI risk controls. See role-based options at Complete AI Training.
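Part of the citation check can be automated before the human review step. The sketch below, in Python, assumes the deliverable's reference list has been exported to a CSV with id, title, and url columns; the filename and column names are illustrative, not part of any standard. It only flags citations whose URLs are missing or unreachable, so a reviewer still has to confirm that each source exists and actually supports the claim it is cited for.

```python
"""Minimal sketch: flag report citations whose URLs do not resolve.

Assumes a CSV export of the reference list with columns id, title, url.
A reachable URL is not proof of accuracy; flagged and unflagged entries
alike still need human verification against the report's claims.
"""
import csv
import urllib.error
import urllib.request


def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers a HEAD request with a status below 400."""
    request = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


def check_citations(path: str) -> list[dict]:
    """Return citation rows whose URL is missing or unreachable."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            url = (row.get("url") or "").strip()
            if not url or not url_resolves(url):
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    # "citations.csv" is a placeholder path for the exported reference list.
    for row in check_citations("citations.csv"):
        print(f"VERIFY MANUALLY: {row.get('id')} - {row.get('title')}")
```

A script like this catches only the crudest failures, such as links that do not exist; fabricated quotes and misattributed sources, as in this case, still require a named human reviewer as part of the QA gate.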
Key context and references
Department: Department of Employment and Workplace Relations
Tool disclosed in the revised report: Azure OpenAI Service