Deloitte to refund government after AI-tainted $440k report riddled with fake citations

Deloitte used AI on a $440k DEWR report that included fake citations and an invented court quote. A corrected version has been posted, and the government will receive a partial refund.

Published on: Oct 06, 2025

Deloitte admits AI use in $440k report; government to receive partial refund

Deloitte Australia has confirmed that artificial intelligence was used to produce a $440,000 report for the Department of Employment and Workplace Relations (DEWR). The report contained multiple errors, including three nonexistent academic references and a fabricated quote attributed to a Federal Court judgment.

A revised version has been uploaded to the DEWR website; it removes more than a dozen nonexistent references and footnotes, rewrites the reference list, and corrects typographic errors. Deloitte will issue a partial refund to the federal government.

What happened

The original report was delivered with material that did not meet basic verification standards. Several citations were false, and at least one quote was invented. These are failures that both the supplier and the commissioning agency should have caught before publication.

DEWR has now posted an updated report that removes incorrect material and corrects errors. The incident raises clear questions about vendor assurance, AI disclosure, and quality control in government-commissioned work.

Why this matters for government buyers and program leads

  • Trust and credibility: False citations and invented quotes erode public trust and can undermine policy work built on the report.
  • Legal and procurement risk: Misstatements tied to court judgments or legislation can trigger legal exposure and contract disputes.
  • Operational cost: Rework, version control, and reputational management add hidden costs beyond the contract price.
  • AI governance gaps: If vendors use AI without disclosure or controls, agencies inherit the risk of unverified outputs.

Immediate actions to take

  • Require AI-use disclosure: Mandate that vendors state if, where, and how AI tools were used in analysis, drafting, or citations.
  • Verification protocol: Set a checklist for citation validation, quote verification against primary sources, and fact cross-checks before acceptance (a minimal automated spot-check is sketched after this list).
  • Deliverable acceptance gates: Hold payment milestones until independent validation is completed and documented.
  • Version control: Require a change log for every revision, showing what changed and why (especially references and quotes).
  • Retain evidence: Ask for an audit pack with source documents, datasets, and research notes to validate claims.
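
Where references carry DOIs or URLs, part of the citation-validation step can be automated. Below is a minimal spot-check sketch, assuming Python 3 with the `requests` library and an illustrative reference-list shape; a resolving DOI or URL only shows that a source exists, not that it supports the claim, so human review still closes the loop.

```python
# Minimal spot-check sketch. Assumptions: Python 3, the `requests` library,
# and a reference list shaped like the one below; field names are illustrative.
import requests

CROSSREF = "https://api.crossref.org/works/"  # public Crossref metadata API

def doi_exists(doi: str) -> bool:
    """Return True if Crossref knows this DOI: a strong hint it is real."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    return resp.status_code == 200

def url_resolves(url: str) -> bool:
    """Return True if the cited URL responds. Some servers reject HEAD,
    so a failure here means 'flag for manual review', not 'fake'."""
    try:
        resp = requests.head(url, timeout=10, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

references = [
    {"label": "Ref 1", "doi": "10.1000/xyz123"},              # placeholder DOI
    {"label": "Ref 2", "url": "https://example.org/report"},  # placeholder URL
]

for ref in references:
    ok = doi_exists(ref["doi"]) if "doi" in ref else url_resolves(ref["url"])
    print(f"{ref['label']}: {'resolves' if ok else 'FLAG for manual review'}")
```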

Contract language you can add or tighten

  • AI disclosure clause: Vendors must disclose all AI tools used, their purpose, and safeguards applied, prior to submission.
  • Citation warranty: All references must be verifiable and accessible. Fabricated or unverifiable sources constitute a material breach.
  • Primary-source requirement: Quotes must be traced to official judgments, legislation, or publications, with links or citations to primary sources.
  • Right to audit: Agency may inspect working files, prompts, outputs, and human review notes.
  • Remedies: Define fee reductions, rework at vendor cost, and refund triggers for non-conforming deliverables.

Quality checks for policy, legal, and audit teams

  • Reference triage: Spot-check a sample of references first; if two or more are invalid, expand to a full 100% review (see the sketch after this list).
  • Quote verification: Confirm each quote against the primary source (court database, legislation, or official publication).
  • AI fingerprint indicators: Look for generic phrasing, inconsistent citation styles, or references that don't resolve.
  • Plagiarism and hallucination checks: Use plagiarism detectors and run random paragraphs through source searches.
  • Attribution and transparency: Require a methods section stating who wrote what, what tools were used, and how facts were checked.
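
The triage rule above is straightforward to encode. Here is a minimal sketch, assuming `verify` is any per-reference check (such as the DOI/URL spot-check sketched earlier); the sample size and escalation threshold are policy choices, not fixed standards.

```python
# Sketch of the triage rule: sample references first, escalate to a full
# review if two or more sampled entries fail verification.
import random
from typing import Callable

def triage(references: list[str],
           verify: Callable[[str], bool],
           sample_size: int = 10) -> list[str]:
    """Return the references that need manual review."""
    sample = random.sample(references, min(sample_size, len(references)))
    failures = [r for r in sample if not verify(r)]
    if len(failures) >= 2:
        return list(references)  # two or more bad samples: review everything
    return failures              # otherwise only the failed samples

# Usage with a stand-in verifier (replace with a real DOI/URL check):
refs = [f"ref-{i}" for i in range(40)]
flagged = triage(refs, verify=lambda r: not r.endswith("7"))
print(f"{len(flagged)} of {len(refs)} references flagged for manual review")
```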

Governance upgrades to reduce risk

  • Pre-qualification: Screen vendors for AI governance maturity and ask for their internal QA process and tools.
  • Template pack: Provide agencies and vendors with standard templates for disclosure, reference lists, and verification logs (a sample log entry is sketched after this list).
  • Training: Upskill contract managers and reviewers on AI limitations, citation validation, and prompt audit trails.
  • Central support: Establish a specialist review pool for high-risk deliverables (legal, statistical, or technical reports).
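
As one illustration of what a verification-log template could capture, here is a hypothetical entry schema; every field name is an assumption chosen for the sketch, not a mandated government standard.

```python
# Hypothetical verification-log entry; fields are illustrative only.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class VerificationLogEntry:
    reference: str       # citation exactly as it appears in the deliverable
    primary_source: str  # where it was checked (court database, journal, etc.)
    checked_by: str      # reviewer name or ID
    checked_on: str      # ISO date of the check
    result: str          # "verified" | "corrected" | "removed"
    notes: str = ""      # what changed and why, if anything

entry = VerificationLogEntry(
    reference="Example v Placeholder [2024] FCA 000 at [45]",
    primary_source="Federal Court of Australia judgments database",
    checked_by="reviewer-01",
    checked_on=date.today().isoformat(),
    result="verified",
)
print(json.dumps(asdict(entry), indent=2))
```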

For teams building capability

If your unit is updating review checklists or training staff to assess AI-assisted outputs, explore structured learning tracks by role here: Complete AI Training - Courses by Job.

The takeaway is simple: disclosure, verification, and accountability must be non-negotiable. AI can speed up drafting, but it cannot replace source checks, legal accuracy, or professional judgment. Tighten contracts, enforce validation, and pay only for work that stands up to scrutiny.