Deloitte to refund $290,000 after AI mistakes and fabricated references in Australian government report

Deloitte will repay AUD 290,000 after AI-assisted report errors, including fake citations. Findings unchanged; calls for tighter AI checks in government.

Published on: Oct 08, 2025

Deloitte to refund Australian government AUD 290,000 after AI-assisted report errors

Deloitte will issue a partial refund to the Australian government after a review commissioned by the Department of Employment and Workplace Relations (DEWR) contained significant errors, including fabricated references and a misattributed court quote. The firm has updated the document, corrected more than a dozen references and footnotes, and will repay about AUD 290,000. DEWR says the report's findings and recommendations remain unchanged.

The incident has drawn criticism over how consultants use generative AI in government work. It also highlights gaps in procurement controls, validation processes, and accountability when AI tools are involved.

What went wrong

  • A seven-month review, delivered around June 2025, cost AUD 440,000 and assessed the Targeted Compliance Framework used in the welfare system.
  • Errors included nonexistent academic citations and a fabricated quote from a Federal Court judgment, first flagged by academic Dr. Christopher Rudge.
  • An updated report corrected numerous references, rewrote the reference list, and fixed typographical issues.
  • Deloitte acknowledged incorrect footnotes and references and agreed to repay the final installment under its contract (about AUD 290,000).
  • DEWR stated the changes do not alter the report's conclusions.

Where AI fit in

The updated report disclosed use of a DEWR-licensed, DEWR-hosted Azure OpenAI GPT-4o toolchain. Deloitte did not directly attribute the errors to AI, but the faulty references are consistent with AI "hallucinations."

Labor Senator Deborah O'Neill criticized the approach and called for clearer scrutiny of who is doing the work and how. Dr. Rudge noted that while citations were flawed, the broad conclusions aligned with other evidence.

Why this matters for government teams

If you buy advisory work, you now own a new class of risk. AI can speed up drafting, but it can also produce confident nonsense. Unless you set clear controls, that risk flows into your systems, policy, and public communications.

  • Treat AI-assisted content as "unverified" until checked by named subject-matter experts.
  • Require verifiable sources for every factual claim, legal reference, or statistic.
  • Build refund and remediation clauses that trigger on citation errors or provenance gaps.
  • Log all AI usage in deliverables (models, versions, prompts, datasets, and human reviewers); a sketch of one possible log entry follows this list.
  • Prohibit AI-generated legal quotes and case references without human verification against primary sources.
  • Maintain an auditable trail of drafts, notes, and source checks.
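
To make the logging point concrete, here is a minimal sketch of what a single AI-usage record for a deliverable might capture. The AIUsageRecord fields and the sample values are illustrative assumptions, not a prescribed schema; adapt them to your own records-management standards.

```python
# Minimal sketch of one AI-usage record for a commissioned deliverable.
# Field names and sample values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUsageRecord:
    deliverable: str      # report or section the AI output fed into
    model: str            # e.g. "gpt-4o"
    model_version: str    # provider version or deployment identifier
    hosting: str          # where the model ran (tenancy, region)
    purpose: str          # drafting, summarising, citation lookup, etc.
    prompt_archive: str   # where the full prompt log is retained
    datasets: list = field(default_factory=list)         # inputs supplied to the model
    human_reviewers: list = field(default_factory=list)  # named people who verified the output

record = AIUsageRecord(
    deliverable="Assurance review, background section",
    model="gpt-4o",
    model_version="agency Azure OpenAI deployment",
    hosting="department-hosted Azure tenancy",
    purpose="first-pass drafting of background material",
    prompt_archive="records/ai-prompts/assurance-review/",
    datasets=["agency-supplied policy documents"],
    human_reviewers=["named subject-matter expert", "legal reviewer"],
)

# Emit the record as JSON so it can sit in the deliverable's audit trail.
print(json.dumps(asdict(record), indent=2))
```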

Procurement clauses you can adopt now

  • Tooling disclosure: Vendors must list all AI tools used, including model name/version and hosting location.
  • Provenance: Deliverables must include a bibliography with DOIs/URLs and a source map that ties claims to sources.
  • Verification: Named experts sign off on facts, legal citations, and key recommendations.
  • Quality gates: Define error thresholds and corrective timelines; link them to payment milestones.
  • Indemnity and refunds: Financial remedies for fabricated or unverifiable content.
  • Security: AI systems must meet your data classification, privacy, and retention requirements.
  • Audit rights: Access to working papers, prompts, and change logs on request.

Operational safeguards for agencies

  • Appoint an accountable executive for AI use in external deliverables.
  • Adopt a standardized "AI-use statement" for every report you commission.
  • Stand up a citation-checking workflow (human review plus automated link/DOI validation); see the sketch after this list.
  • Run spot checks on legal quotes against primary sources before publication.
  • Keep a central register of vendor AI models approved for use with government work.
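
To illustrate the automated half of that citation-checking workflow, here is a minimal Python sketch that resolves DOIs through doi.org and checks that cited URLs respond. The sample citations are hypothetical and the requests dependency is an assumption; a passing check only confirms a reference exists and is reachable, not that it supports the claim, so named human reviewers remain the final gate.

```python
# Minimal sketch of automated link/DOI validation for a reference list.
# Assumes the third-party `requests` package is installed.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if doi.org resolves the DOI to a landing page."""
    try:
        resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def url_reachable(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL answers with a non-error status."""
    try:
        resp = requests.get(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Hypothetical reference list; real entries would come from the deliverable's bibliography.
citations = [
    {"doi": "10.1000/xyz123"},
    {"url": "https://example.gov.au/report"},
]

for c in citations:
    if "doi" in c:
        ok = doi_resolves(c["doi"])
        print(c["doi"], "resolves" if ok else "FAILED - escalate to a human reviewer")
    else:
        ok = url_reachable(c["url"])
        print(c["url"], "reachable" if ok else "FAILED - escalate to a human reviewer")
```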

Capability building

Your people need the judgment to review AI-assisted work and the process discipline to catch errors before they hit the public record. Invest in training that covers prompt controls, citation standards, legal-source verification, and audit trails.

Explore role-based learning paths for public sector teams: AI courses by job.

Further reading