Deloitte to refund part of $440,000 fee after AI errors in DEWR report
Deloitte will repay the final instalment of a $440,000 contract to the Department of Employment and Workplace Relations after a report was found to contain fabricated references and inaccurate footnotes.
The report, commissioned in December 2024 to review DEWR's compliance framework and IT system, was published in July and quickly flagged by academics for errors. It has since been corrected, with false citations removed and a new reference list added. DEWR confirmed the repayment will occur once the issue is resolved.
What happened
University of Sydney academic Chris Rudge first raised the alarm, identifying more than a dozen fake references, including a non-existent paper titled "The Rule of Law and Administrative Justice in the Australian Social Security System." He said the mistakes had "all the hallmarks" of AI-generated content.
Deloitte later acknowledged it used "a generative artificial intelligence (AI) large language model (Azure OpenAI GPT-4o) based tool chain licensed by DEWR and hosted on DEWR's Azure tenancy" during its process. The firm said the corrections did not affect the report's findings and that the matter was resolved with the client.
Why it matters for government teams
False citations erode trust, waste verification time, and put agencies at reputational risk. This isn't isolated: US lawyers were sanctioned after filing briefs with fabricated AI citations, underscoring how unchecked outputs can slip into official work.
Case: US lawyers sanctioned for fake AI citations (Reuters)
Immediate actions for agencies
- Make AI disclosure mandatory. Vendors must list the models, versions, and hosting used on every deliverable.
- Require source-backed citations. Enforce links/DOIs for all references and run automated checks (e.g., Crossref lookups).
- Hold back fees until validation passes. Include acceptance criteria like "zero fabricated references."
- Keep audit trails. Vendors must retain prompts, outputs, and edit logs for review.
- Assign expert reviewers. Pair AI-assisted drafts with subject-matter sign-off before publication.
- Ban model-invented citations. Only allow references verified in a reference manager or primary sources.
- Protect data. Use agency tenancy, restrict training on agency data, and define retention policies.
- Document incident response. Set timelines for corrections, reissuance, and public notes when needed.
- Lift capability. Provide staff AI literacy and prompt-review training to spot red flags early; role-based AI courses can help.
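The automated citation check suggested above can be lightweight. A minimal sketch in Python, using the public Crossref REST API (`api.crossref.org/works/{doi}`) to confirm a cited DOI is actually registered; the function names are illustrative, not part of any agency tooling:

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# DOIs begin with "10.", a registrant prefix, then a suffix.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Cheap syntactic check before hitting the network."""
    return bool(DOI_RE.match(doi))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the Crossref REST API whether this DOI is registered.

    A fabricated reference typically fails here with a 404.
    Returns False for malformed input or an unknown DOI.
    """
    if not looks_like_doi(doi):
        return False
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

A reviewer could run every DOI from a deliverable's reference list through `doi_resolves` and flag failures for manual checking; references without a DOI still need a human to locate the primary source.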
Procurement clauses to include now
- AI use disclosure: model name/version, hosting environment, and data handling.
- Quality warranty: no fabricated citations; all references must be verifiable and accessible.
- Human approval: vendor executive and SME sign-off on accuracy and sourcing.
- Right to audit: access to prompts, outputs, and edit history for due diligence.
- Retention/holdbacks: tie final payments to successful verification and acceptance.
- Remedies: correction timelines, fee reductions, and refunds for noncompliance.
- Indemnity for reputational or administrative costs caused by fabricated content.
Build a safer AI workflow
- Draft with AI only in controlled environments and label AI-assisted sections.
- Validate facts and references with human reviewers and automated checks before circulation.
- Reconstruct every citation from the source, not from the model output.
- Publish a correction protocol so issues are fixed fast and transparently.
For broader policy alignment, review the Digital Transformation Agency's federal guidance on responsible AI use in the public service.
The takeaway
AI can help with drafting, but it can't replace verification. Put disclosure, validation, and financial incentives on the table, and you'll keep speed without sacrificing accuracy or public trust.