Deloitte refunds part of AU$439k contract after AI-faked citations taint Australian government report

Deloitte has partially refunded DEWR after AI-written sections of a 237-page review were found to contain fabricated citations and quotes. Agencies must demand AI disclosure, verify sources, and audit vendors.

Categorized in: AI News, Government
Published on: Oct 17, 2025

Deloitte's AI fiasco forces refund: what government teams must change now

Australia's Department of Employment and Workplace Relations (DEWR) has received a refund from Deloitte after the firm admitted that AI-written sections of a 237-page assurance review included fabricated references, quotes, and academic sources. The repayment covers part of the AU$439,000 (£230,000) fee and is among the first public cases of a government clawback tied to undisclosed AI use.

The case has triggered a wider conversation inside agencies and procurement teams: if a top-tier consultancy can ship invented citations into an official review, what stops it from happening again?

What happened

In December 2024, DEWR engaged Deloitte for an "independent assurance review" of the Targeted Compliance Framework, an automated welfare system that penalised jobseekers for missed obligations. The report was published in July 2025.

After publication, a University of Sydney researcher, Dr Chris Rudge, identified citations to non-existent experts and papers, as well as an invented quote attributed to a federal court judge. Some of the supposed studies were attributed to the University of Sydney and Sweden's Lund University, but the works did not exist.

Deloitte later confirmed that several references and footnotes were fabricated and that Microsoft's Azure OpenAI GPT-4o assisted with drafting. A corrected version was issued on 26 September 2025 with an AI-use disclosure and the false citations removed. Deloitte repaid the final instalment following discussions with the department. DEWR stated the core analysis and recommendations remain valid.

Relevant references: DEWR, Azure OpenAI Service.

Why it matters for government teams

Trust in public reporting depends on source integrity. Fabricated citations can mislead ministers, drive poor policy, and expose agencies to legal and reputational risk.

Undisclosed AI use by vendors weakens accountability. Without logs, disclosures, and verification, you cannot reconstruct how a conclusion was produced.

Immediate policy and process changes

  • Mandate AI-use disclosure in RFTs, contracts, and the front matter of all deliverables.
  • Ban autogenerated citations and quotes. Every source must be verified by a named human reviewer.
  • Require toolchain logs: model family, provider, version, prompts, and timestamps. Retain for audit (an illustrative log record follows this list).
  • Set remedies: fee reductions, refunds, and termination for undisclosed AI use or fabricated material.
  • Run independent spot-checks. Sample pages, trace every citation, and confirm quotes against official records.
  • Define data rules: no uploading sensitive or personal information to external models without approval and controls.
  • Assign accountable sign-offs: partner-level at the vendor and an SES officer inside the agency.
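
As a concrete illustration of the toolchain-log item above, here is a minimal sketch of what a single retained log entry might look like. The field names and sample values are assumptions for illustration, not a mandated schema; adapt them to your agency's records-management requirements.

    # Illustrative only: one possible shape for a retained AI-toolchain log entry.
    # Field names and sample values are assumptions, not a prescribed standard.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ToolchainLogEntry:
        deliverable: str    # document and section the output fed into
        model_family: str   # e.g. "GPT-4o"
        provider: str       # e.g. "Microsoft Azure OpenAI Service"
        model_version: str  # provider's version identifier
        prompt: str         # the exact prompt submitted
        timestamp_utc: str  # ISO 8601 time of the model call
        reviewer: str       # named human who verified the output

    entry = ToolchainLogEntry(
        deliverable="Assurance review, section 3.2",
        model_family="GPT-4o",
        provider="Microsoft Azure OpenAI Service",
        model_version="2024-05-13",
        prompt="Summarise the framework's penalty escalation rules ...",
        timestamp_utc="2025-06-12T03:41:00Z",
        reviewer="J. Citizen (engagement partner)",
    )

    # Append one JSON record per model call to an append-only file for audit.
    print(json.dumps(asdict(entry)))

Appending one such record per model call gives auditors enough to reconstruct how a passage was produced and who checked it.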

Procurement language you can reuse

  • Disclosure: "The supplier must disclose any AI tools used in producing the work, including model family, provider, and version."
  • Source integrity: "All citations must resolve to verifiable sources. Fabricated or untraceable sources are a material defect."
  • Audit access: "On request, the supplier will provide prompt and revision logs and maintain them for seven years."
  • Remedies: "Undisclosed AI use or fabricated material will trigger fee reduction or refund and may lead to termination."
  • Indemnity: "The supplier indemnifies the Commonwealth for losses arising from fabricated references or false quotes."

For internal teams drafting reports

  • Adopt a "no source, no sentence" rule. Facts need a document, dataset, or interview note behind them.
  • Verify every quote and judgment against the official record. No exceptions.
  • Prefer retrieval-supported workflows that cite the source next to the claim.
  • Keep a human-edited bibliography. Do not let a model generate references.
  • Train reviewers to spot AI tells: journals that do not exist, dead links, vague attributions, odd DOI patterns (a link- and DOI-check sketch follows this list).
  • Maintain an AI register listing approved tools, versions, and permitted uses.
  • Use a publication checklist: disclosure present, citations verified, quotes confirmed, logs archived.
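
To make the dead-link and DOI check above less manual, a reviewer can run a rough spot-check along the lines of the sketch below. It assumes citations have already been extracted into a simple list and that DOIs can be tested through the public doi.org resolver. A failed lookup only flags an entry for human review (some publishers block automated requests), and a successful one does not confirm that the source actually supports the claim.

    # Rough citation spot-check: flags references whose DOI or URL does not resolve.
    # A failure means "check manually", not "fabricated"; a pass is not verification.
    import urllib.request
    import urllib.error

    def resolves(url: str, timeout: int = 10) -> bool:
        """Return True if the URL answers with an HTTP status below 400."""
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-spot-check"}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    # Hypothetical citation tracker: (label, DOI or URL) pairs pulled from the draft.
    citations = [
        ("Smith 2023", "10.1000/example-doi"),
        ("Department guidance", "https://www.dewr.gov.au/"),
    ]

    for label, ref in citations:
        target = ref if ref.startswith("http") else "https://doi.org/" + ref
        print(label, "->", "resolves" if resolves(target) else "CHECK MANUALLY")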

Questions to ask vendors now

  • Which AI models and versions were used? For what steps?
  • Who checked each citation and quote? Can we see the tracker?
  • Can you provide prompt logs and change history if requested?
  • What data left your environment? How is it stored and for how long?
  • What remedies do you accept if we find fabricated or unverifiable material?

Context for leadership

The timing is awkward for Deloitte, which recently announced a partnership with Anthropic to provide staff access to the Claude chatbot, part of a broader push by large firms to weave generative AI into daily work. Partnerships like these heighten the need for formal controls around sourcing, disclosure, and audit trails. See: Claude.

The bottom line

AI can help with speed, but public trust hinges on verifiable sources. Make disclosure, verification, and auditability non-negotiable for both vendors and internal teams.

If your team needs structured upskilling on prompt discipline, verification, and audit trails, review these options: AI courses by job.

