Deloitte to refund part of $440,000 fee after AI-generated errors surface in DEWR report
Deloitte Australia will return part of its $440,000 payment after delivering an assurance report to the Department of Employment and Workplace Relations (DEWR) that included AI-generated inaccuracies. The original version referenced academic works that do not exist and included a fabricated quote attributed to a Federal Court judgment in Amato v Commonwealth (2019).
The report, first released on 4 July 2025, was updated on 26 September 2025 to remove more than a dozen errors. DEWR posted the corrected version and a departmental statement on 3 October 2025, noting that the changes do not alter the report's findings or recommendations.
What went wrong
Internal checks identified the hallmarks of AI "hallucinations": invented citations and misquoted legal material. Deloitte has acknowledged incorrect footnotes and references in the assurance work that reviewed the IT system underpinning the Targeted Compliance Framework.
The department's update states the summary of the Amato proceeding was amended for accuracy and clarity. For reference, the Federal Court's judgment is publicly available on AustLII: Amato v Commonwealth (2019).
Department response
DEWR Secretary Natalie James said reviews are in progress to ensure decisions are made lawfully and with sound process, including a legal review and an independent assurance review of the system against policy and business rules. That assurance review, first published on 4 July 2025, was later corrected to address citation accuracy.
While Deloitte has not commented publicly, DEWR confirmed the firm agreed to a partial refund. The firm's own advisory work frequently promotes AI literacy and human review, an irony not lost on observers.
Political pressure escalates
The Greens are calling for stronger consequences. Senator Barbara Pocock has urged a full refund of the $440,000 and penalties for delivering poor-quality work.
The party is also pushing to bar consultancy firms from future public service contracts when they act unethically or deliver substandard work. The call follows the widely reported PwC incident involving the misuse of confidential Treasury information.
What this means for APS leaders
This episode is a procurement and assurance lesson. If vendors use generative AI, you need upfront disclosure, enforceable quality controls, and clear acceptance criteria that include source verification.
Treat AI-produced content as draft material until verified by humans with domain expertise. Legal summaries, case law, and policy interpretations always require primary-source checks.
Procurement safeguards to adopt now
- Require supplier disclosure of AI use across the delivery chain, including subcontractors.
- Build in staged acceptance tied to evidence of source validation (citations, links, case references); a sketch of one such gate follows this list.
- Include audit rights for prompts, datasets, and tooling used in report generation.
- Mandate human-in-the-loop review by qualified specialists for legal, policy, and data claims.
- Add contractual remedies: holdbacks, rework at supplier cost, and refund clauses for accuracy failures.
- Set a minimum citation standard: primary sources preferred; every quote verifiable.
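To show what a staged-acceptance gate could look like in practice, here is a minimal, hedged sketch: a hypothetical record a contract manager might keep per deliverable, releasing the contractual holdback only once every evidence item is verified. The field names, the 20% holdback, and the evidence items are illustrative assumptions, not DEWR or Deloitte practice.

```python
from dataclasses import dataclass, field

@dataclass
class AcceptanceGate:
    """One staged-acceptance checkpoint for a deliverable (illustrative)."""
    deliverable: str
    evidence_verified: dict[str, bool] = field(default_factory=dict)
    holdback_pct: float = 0.20  # hypothetical contractual holdback

    def payable_now(self, fee: float) -> float:
        """Release the full fee only when all evidence items are verified."""
        if self.evidence_verified and all(self.evidence_verified.values()):
            return fee
        return fee * (1 - self.holdback_pct)

# Hypothetical usage for a $440,000 engagement with three evidence items.
gate = AcceptanceGate(
    deliverable="Assurance report v1",
    evidence_verified={
        "citations_resolve": True,
        "quotes_match_primary_sources": False,  # fails the sweep
        "ai_use_disclosed": True,
    },
)
print(f"Payable now: ${gate.payable_now(440_000):,.2f}")  # $352,000.00
```

The point of the sketch is that acceptance becomes a checklist with money attached: until the quote-verification evidence clears, part of the fee stays withheld by design rather than clawed back after the fact.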
Operational checks for AI-assisted reports
- Source verification sweep: click every citation and confirm the quoted text exists and is accurate (a minimal script sketch follows this list).
- Fact pattern cross-check: compare claims against primary documents (Acts, regs, case law, policy).
- Risk flagging: label sections that rely on AI-generated text pending human verification.
- System guardrails: disable auto-generated citations; use retrieval from approved corpora only.
- Audit log: preserve drafts, prompts, and review notes for accountability and FOI readiness.
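The source verification sweep lends itself to partial automation. Below is a minimal sketch, assuming citations are kept as (URL, quoted text) pairs and that the third-party `requests` library is installed; it confirms each link resolves and the quote appears verbatim. It is a first-pass filter, not a substitute for the human primary-source read the list above mandates.

```python
"""Citation-sweep sketch: check that each cited URL resolves and
contains the quoted passage. Illustrative only; humans review next."""

import requests  # assumed installed: pip install requests

# Hypothetical citation register extracted from a draft report.
CITATIONS = [
    ("https://example.gov.au/primary-source", "quoted passage to verify"),
]

def verify(url: str, quote: str, timeout: float = 10.0) -> str:
    """Return a status label for one (url, quote) citation."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return f"UNREACHABLE ({type(exc).__name__})"
    if resp.status_code != 200:
        return f"HTTP {resp.status_code}"
    # Naive exact-containment check; paraphrased quotes still need eyes.
    return "QUOTE FOUND" if quote in resp.text else "QUOTE NOT FOUND"

if __name__ == "__main__":
    for url, quote in CITATIONS:
        print(f"{verify(url, quote):<18} {url}")
```

Even a crude sweep like this would have flagged the core failure in the Deloitte report: citations that resolve nowhere and quotes that appear in no primary source.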
Policy context and guidance
Agencies implementing AI should align with government ethics principles and practical assurance steps. For reference, see the Australian Government's AI ethics guidance: AI Ethics Principles.
If your team needs to upskill
If you're formalising human-in-the-loop processes, vendor oversight, or prompt governance, structured training can help. See curated options by role here: AI courses by job.
What to watch next
- Whether DEWR discloses further controls on supplier use of AI in assurance work.
- Government response to proposals for banning firms that deliver poor or unethical work.
- Updates to procurement rules to require AI-use disclosure and verifiable citations by default.
Bottom line for government teams: insist on transparency, verify sources, and embed consequences in contracts. AI can speed delivery, but public accountability demands proof, not placeholders.