Deloitte's AI-Assisted Welfare Report Triggers Refund: What Government Teams Should Do Next
Deloitte's Australian member firm will partially refund the government after a $290,000 welfare compliance report was found to include AI-generated errors. The 237-page report was drafted with the help of Azure OpenAI and contained fabricated citations and a quote wrongly attributed to a Federal Court judgment.
A revised version now discloses AI use, removes the false references, and replaces the original July publication. Deloitte says the changes do not alter the report's findings and recommendations, and the firm says the matter is resolved with the client.
What Changed in the Updated Report
- Explicit disclosure that a generative AI system (Azure OpenAI) was used.
- Removal of fabricated citations and a misattributed judicial quote.
- Clarification that the substantive conclusions remain the same, according to Deloitte.
Why This Matters for Government
AI can speed up research and drafting, but it can also invent sources, quotes, and facts. If unchecked, that risk flows straight into official policy, enforcement, and public trust.
This incident shows the need for clear rules on AI use in vendor deliverables, stronger verification, and enforceable accountability.
Immediate Actions for Procurement and Program Teams
- Require written AI-use disclosure for every deliverable: tools, model/provider, version, prompts/workflows, datasets, and human review steps.
- Mandate human verification of every quote and citation, with a signed attestation and a verification log shared with the client.
- Introduce pre-delivery checks for broken or unreachable links, non-existent publications, mismatched authors or fields, and quotes without primary sources (a minimal link-check sketch follows this list).
- Adopt a standard citation package: PDFs or archived copies of all sources, plus a source-to-assertion trace.
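As a starting point, the link portion of those pre-delivery checks is easy to automate. The sketch below uses only the Python standard library; the references.txt input file and the exact pass/fail rules are illustrative assumptions, not a prescribed tool.

```python
# Minimal pre-delivery link check: flags unreachable or broken reference URLs.
# Assumes a hypothetical references.txt file with one URL per line.
import urllib.request
import urllib.error

def check_url(url: str, timeout: float = 10.0) -> str:
    """Return 'ok' or a short failure reason for one reference URL."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return "ok" if resp.status < 400 else f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return f"HTTP {e.code}"
    except (urllib.error.URLError, TimeoutError) as e:
        return f"unreachable ({e})"

if __name__ == "__main__":
    with open("references.txt") as f:
        for url in (line.strip() for line in f):
            if url:
                print(f"{check_url(url):<24} {url}")
```

Some servers reject HEAD requests, so a production version would fall back to GET. A passing link check also says nothing about whether the page supports the assertion it is cited for; that still requires the human verification step above.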
Contract Language You Can Add Today
- AI Disclosure Clause: "Supplier will disclose any AI systems used, including provider, model, version, and prompts/workflows materially affecting content."
- Verification Clause: "Supplier certifies that all citations and quotes have been verified against primary sources and will provide a verification log upon delivery." (A sample log-entry format follows this list.)
- Audit Trail Clause: "Supplier will maintain and provide on request an audit trail of drafts, prompts, outputs, and human edits."
- Quality and Remedies: "Hallucinated sources or misattributed quotes constitute a material defect. The client may require remediation at no cost and apply fee reductions or refunds."
- Use Restrictions: "No synthetic citations; quotes must be sourced to primary records. Generative summaries must include source lists with working links."
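Because the Verification Clause hinges on a "verification log," it helps to pin down a format in the SOW. Here is a minimal sketch of one per-citation entry; the field names and values are illustrative assumptions, not a mandated standard.

```python
# One illustrative verification-log entry (JSON Lines works well for these).
# All field names and values are placeholders, not a prescribed schema.
import json
from datetime import date

entry = {
    "assertion": "Example claim made in the deliverable.",
    "citation": "Example source, with section or page reference",
    "source_url": "https://example.gov/report.pdf",  # hypothetical URL
    "primary_source": True,
    "verified_by": "J. Analyst",
    "verified_on": date.today().isoformat(),
    "method": "manual check against archived PDF",
}
print(json.dumps(entry, indent=2))
```

One entry per assertion-to-source pair gives the client the source-to-assertion trace described in the previous section.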
Technical Controls to Demand
- Source-grounded generation (retrieval-augmented generation, or RAG) with inline citations that resolve to verifiable documents.
- Automated link and reference checking before submission.
- Named-entity and quote verification against primary legal and academic databases.
- Versioned logging of prompts and outputs for reproducibility and review (a minimal logging sketch follows this list).
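To make the last control concrete, here is a minimal sketch of append-only prompt/output logging: one JSON line per generation, with a prompt hash so later tampering is detectable. The file name, record fields, and model label are assumptions for illustration.

```python
# Minimal versioned logging of prompts and outputs (JSON Lines, append-only).
# LOG_PATH and the record fields are illustrative assumptions.
import hashlib
import json
import time

LOG_PATH = "ai_audit_log.jsonl"

def log_generation(model: str, prompt: str, output: str) -> None:
    """Append one tamper-evident record per model call."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_generation("provider/model-v1", "Summarize section 3.", "draft text...")
```

A real deployment would also hash outputs and chain records together, but even this much gives the audit-trail clause above something enforceable to point at.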
Governance and Oversight
- Adopt an AI risk framework and require vendors to align with it and evidence their controls; the NIST AI Risk Management Framework is a sound starting point.
- Request third-party assurance (e.g., an AI management system certification such as ISO/IEC 42001, or an equivalent independent attestation).
- Run independent technical and legal reviews of high-impact deliverables before publication.
- Set clear penalties, clawbacks, and reporting timelines when defects are found.
Red Flags That Suggest AI Hallucination
- Citations to journals or books that do not exist, or titles outside an author's known field.
- Quotes that cannot be traced to a primary judgment or official record.
- Reference lists with dead links, placeholder text, or duplicate citations with minor edits (a near-duplicate check is sketched after this list).
- Overconfident language paired with vague sourcing ("studies show" without a source bundle).
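The duplicate-citation pattern in particular lends itself to a cheap automated check. Below is a sketch that flags reference-list entries that are suspiciously similar; the 0.9 similarity threshold and the sample entries are assumptions to tune against real reference lists.

```python
# Flag near-duplicate reference entries, a common hallucination artifact.
# The similarity threshold and sample references are illustrative.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(refs: list[str], threshold: float = 0.9):
    """Yield pairs of distinct entries whose similarity exceeds threshold."""
    for a, b in combinations(refs, 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if a != b and ratio >= threshold:
            yield ratio, a, b

refs = [
    "Smith, J. (2021). Welfare Compliance Review. Gov Press.",
    "Smith, J. (2021). Welfare Compliance Reviews. Gov. Press.",
    "Lee, K. (2019). Administrative Law in Practice.",
]
for ratio, a, b in near_duplicates(refs):
    print(f"{ratio:.2f}  {a!r} ~ {b!r}")
```

Flagged pairs still need a human decision; legitimate reference lists can contain genuinely similar entries, such as multiple works by one author in one year.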
Context You Should Know
A researcher spotted the errors after the report attributed a non-existent book to a University of Sydney law professor. The Australian Financial Review reported the issue in late August, prompting Deloitte's review and corrections.
In parallel, the firm has announced major AI investments and partnerships, including a multibillion-dollar plan through FY2030 and access to new models for hundreds of thousands of staff. Regulators have also warned that audit quality can suffer without proper controls around AI use.
What This Means for Public Sector Leaders
Set clear expectations now. If vendors use AI, that's fine: require them to prove accuracy, show their work, and accept the consequences when they fail basic standards.
This is less about banning tools and more about building verifiable truth into your process.
Next Steps
- Update RFPs and SOWs with AI disclosure, verification, and audit clauses.
- Stand up a quick review cell to spot-check citations and quotes before publication.
- Require a remediation plan and refund terms in case of synthetic or misattributed content.