Deloitte to Repay Final Installment After AI-Fabricated Citations in Australian Government Report

Deloitte will refund the final installment of its AU$440,000 contract with DEWR after AI-generated citation errors surfaced in a July assurance review. The firm has updated the report and says its findings stand; DEWR will disclose the repayment once finalised.

Published on: Oct 07, 2025

Deloitte will repay the final installment of its AU$440,000 (£216,000) contract with the Department of Employment and Workplace Relations (DEWR) after AI-generated citation errors were found in a July assurance review. The review examined a targeted compliance framework and IT system tied to a welfare process that automatically penalises jobseekers.

The report identified widespread issues, but subsequent checks in August uncovered references that were incorrect or did not exist. Deloitte uploaded a corrected version last Friday and said the AI-related errors did not affect the report's substantive findings or recommendations.

What DEWR and Parliament said

"Deloitte conducted the independent assurance review and has confirmed some footnotes and references were incorrect," a DEWR spokesperson said. The updated report adds a disclosure about the use of generative AI in the appendix.

Senator Deborah O'Neill said the firm has a "human intelligence problem," adding that "perhaps instead of a big consulting firm, procurers would be better off signing up for a ChatGPT subscription." Deloitte Australia, which reported $2.55bn in FY2025 revenue, said: "The matter has been resolved directly with the client."

DEWR has confirmed the repayment will be made public once finalised.

Why this matters for government buyers

  • AI is becoming a standard part of vendor workflows. Without controls, basic citation errors can slip through and undermine public trust.
  • Disclosure gaps make it hard to judge the reliability of analysis used for policy and program decisions.
  • Payment structures that aren't tied to verification reward speed over accuracy.

Practical controls you can implement now

  • AI use declaration: Require vendors to disclose whether, where, and how generative AI contributed to deliverables (models, prompts, data sources, and human review steps).
  • Source traceability: For every citation, require a working link, a persistent identifier (e.g., DOI), and an archived copy or snapshot. Sample and validate references before acceptance.
  • Named human attestation: A senior reviewer signs off that citations and extracts match source material. Include accountability in the contract.
  • Version control: Mandate change logs for any re-uploaded reports, with a clear summary of corrections and their impact.
  • Automated checks: Use reference validators to flag broken links, nonexistent sources, and hallucinated citations before delivery; see the sketch after this list.
  • Milestone payments: Tie final payment to evidence of source verification and resolution of defects found during agency review.
  • Incident protocol: Set time-bound requirements for corrections, client notification, and public updates when errors are material.
  • Audit rights: Reserve the right to examine the vendor's quality assurance steps, including AI usage logs and reviewer notes.
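As a concrete starting point for the "automated checks" control above, here is a minimal reference-validator sketch. It is illustrative only, not the tooling used in this engagement: it assumes citations have been exported to a CSV with title, url, and optional doi columns (all hypothetical names) and relies on the third-party requests package.

```python
"""Minimal citation-validator sketch (illustrative; assumes a CSV
export with 'title', 'url', and optional 'doi' columns).
Requires: pip install requests
"""
import csv
import sys

import requests


def check_url(url: str, timeout: float = 10.0) -> str:
    """Return 'ok' or a short failure label for one URL."""
    try:
        # HEAD is cheap; retry with GET for servers that reject HEAD.
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code in (403, 405):
            resp = requests.get(url, timeout=timeout,
                                allow_redirects=True, stream=True)
        return "ok" if resp.status_code < 400 else f"http {resp.status_code}"
    except requests.RequestException as exc:
        return f"unreachable ({type(exc).__name__})"


def main(path: str) -> None:
    flagged = 0
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            problems = []
            status = check_url(row["url"])
            if status != "ok":
                problems.append(f"url -> {status}")
            if row.get("doi"):
                # A DOI that fails to resolve is a strong fabrication signal.
                doi_status = check_url(f"https://doi.org/{row['doi']}")
                if doi_status != "ok":
                    problems.append(f"doi -> {doi_status}")
            if problems:
                flagged += 1
                print(f"FLAG: {row['title']}: " + "; ".join(problems))
    print(f"{flagged} citation(s) need human review before acceptance.")


if __name__ == "__main__":
    main(sys.argv[1])
```

Note that a live link does not prove the source actually supports the claim it is cited for, so a check like this narrows the named reviewer's sampling workload rather than replacing it.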

Questions to add to your next RFP or statement of work (SoW)

  • Did you use generative AI? For which sections and tasks?
  • How do you verify citations and quotes? Which tools and human checks are used?
  • Who is the named reviewer responsible for factual accuracy and sources?
  • Provide a sample verification checklist and a redacted change log from a recent engagement.
  • How will you document corrections and communicate updates to the agency and the public?

Bottom line

AI can speed up research and drafting, but it doesn't replace verification. Government buyers should require clear AI disclosures, enforce source validation, and link payments to proof of quality. These steps protect program integrity and public confidence.

If your team needs structured upskilling on safe, effective AI use in government workflows, see Complete AI Training: Courses by Job.