Deloitte to repay part of $440,000 after AI-assisted Australian government report cited fake sources
Deloitte will refund part of a $440k fee after DEWR found AI-made errors and fake citations in a TCF review. Agencies are urged to enforce disclosure, source checks, and audits.

Deloitte to repay Australian government after AI-assisted report found to contain errors
Deloitte will refund part of a $440,000 fee after the Department of Employment and Workplace Relations (DEWR) found multiple errors in a report produced with generative AI. DEWR said the firm will repay the final instalment of the contract once processing is complete.
The 2024 engagement asked Deloitte to assess the Targeted Compliance Framework (TCF) and its supporting IT system, which issues penalties to job seekers who miss mutual obligation requirements. The report flagged "system defects," weak "traceability" to legislation, and an IT design "driven by punitive assumptions of participant non-compliance."
What went wrong
After publication in July, errors surfaced, including fabricated citations and references to sources that do not exist. University of Sydney academic Christopher Rudge, who first identified the problems, linked them to AI "hallucinations," where models generate plausible but false references.
DEWR has since uploaded a corrected version of the report. It now includes an appendix acknowledging use of a "generative artificial intelligence (AI) large language model (Azure OpenAI GPT-4o) based tool chain" licensed by DEWR and hosted in its Azure tenancy.
Deloitte's position
Deloitte said the AI use did not change the substantive findings or recommendations. "The updates made in no way impact or affect the substantive content, findings and recommendations," the amended report states. A spokesperson added the matter "has been resolved directly with the client."
Labor senator Deborah O'Neill criticised the firm: "Deloitte has a human intelligence problem. This would be laughable if it wasn't so lamentable. A partial refund looks like a partial apology for substandard work." She urged agencies to verify who is doing the work they pay for and quipped that procurers "would be better off signing up for a ChatGPT subscription."
Why this matters for government buyers
This incident exposes a core risk: AI can accelerate drafting, but it also introduces silent failure modes such as fabricated sources, misquotes, and misplaced certainty. If unchecked, those errors can shape policy advice, misinform ministers, and erode trust with the public and Parliament.
It also raises procurement and assurance questions. Agencies need stronger disclosure, validation, and accountability for any AI used in deliverables.
Immediate actions for agencies
- Mandate AI disclosure: Require suppliers to declare all AI tools used, where they run, and for which deliverables or sections.
- Source verification: For any citation, require accessible links or documents and run automated reference checks before acceptance (a minimal checker sketch follows this list).
- Human-in-the-loop: Insist on named subject-matter reviewers who sign off on accuracy, legality, and policy alignment.
- Provenance logs: Demand version history, prompts, and change logs for AI-assisted sections to enable audits.
- No AI-generated citations: Prohibit synthetic references; require primary sources and official records.
- Acceptance gates: Withhold final payment until independent validation confirms citations and key facts.
- Security and privacy: Confirm data handling, tenancy, and model configurations prevent exposure of sensitive or personal information.
- Rectification clauses: Include timelines, rework obligations, and fee reductions for factual errors or fabricated sources.
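To make the source-verification gate above concrete, here is a minimal sketch of an automated reference check. It assumes the supplier delivers a reference pack as a simple CSV with `citation` and `url` columns (an illustrative format, not a standard one) and flags any citation whose link does not resolve. A resolving link is only a first filter; a reviewer still has to confirm the source actually supports the claim it is cited for.

```python
# Minimal reference-pack checker: confirms each cited URL actually resolves.
# The CSV layout ("citation", "url") is an assumed format, and a resolving link
# is only a first filter -- a reviewer still has to confirm the source supports
# the claim it is cited for.
import csv
import urllib.error
import urllib.request

def check_url(url: str, timeout: float = 10.0) -> tuple[bool, str]:
    """Return (ok, detail) for a single cited URL."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "reference-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400, f"HTTP {resp.status}"
    except urllib.error.HTTPError as e:
        return False, f"HTTP {e.code}"
    except OSError as e:  # covers URLError, DNS failures, timeouts
        return False, f"unreachable: {e}"

def check_reference_pack(path: str) -> list[dict]:
    """Read a reference-pack CSV and flag citations whose links do not resolve."""
    failures = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            ok, detail = check_url(row["url"])
            if not ok:
                failures.append({**row, "detail": detail})
    return failures

if __name__ == "__main__":
    for item in check_reference_pack("reference_pack.csv"):
        print(f"FLAG: {item['citation']} -> {item['url']} ({item['detail']})")
```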
Contract and RFT language to add now
- Tool disclosure: "Supplier must disclose all AI systems used, their hosting location, datasets accessed, and purpose in the engagement." (An example disclosure record follows this list.)
- Attribution: "All claims must be traceable to verifiable sources. Supplier must provide a reference pack with accessible links or documents."
- Verification: "Agency may conduct automated and manual reference validation; failure triggers rework at no cost."
- Governance: "Maintain audit trails (prompts, versions, reviewers). Provide on request."
- Data controls: "No transfer of agency data to external or public models without written approval; enforce data residency requirements."
- Warranty: "Deliverables are accurate, lawful, and free of fabricated or misattributed citations."
- Holdbacks: "Reserve X% of fees until acceptance criteria are met, including citation verification."
- Sanctions: "Material misrepresentation or fabricated sources may result in fee reduction or termination."
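One way to operationalise the tool-disclosure and governance clauses is to require a machine-readable AI-use disclosure with each deliverable. The record below is a hypothetical sketch; the field names are illustrative, not a standard schema, and would need adapting to each agency's requirements.

```python
# Hypothetical AI-use disclosure record a supplier could submit with each deliverable.
# Field names are illustrative, not a standard schema; adapt to agency requirements.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AIDisclosure:
    deliverable: str                    # report, section, or annex the tools touched
    tools: list[dict] = field(default_factory=list)        # one entry per AI system
    data_accessed: list[str] = field(default_factory=list)
    human_reviewers: list[str] = field(default_factory=list)

disclosure = AIDisclosure(
    deliverable="Assurance review - final report, sections 3-5",
    tools=[{
        "name": "Azure OpenAI GPT-4o",           # system and model version
        "hosting": "Agency Azure tenancy (AU)",   # where it runs / data residency
        "purpose": "Drafting and summarisation of agency-supplied material",
    }],
    data_accessed=["Agency-supplied interview transcripts"],
    human_reviewers=["Named SME reviewer", "Legal reviewer"],
)

print(json.dumps(asdict(disclosure), indent=2))
```

A structured record like this is easier to audit and compare across suppliers than free-text assurances.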
For internal teams using AI
- Adopt an AI use policy that bans synthetic citations and requires source packs for every brief.
- Use retrieval from approved document repositories so outputs cite real, agency-held sources.
- Run reference validators and plagiarism checks on drafts.
- Keep an approvals checklist: legal, policy, privacy, and SME sign-off before publication.
- Document prompts and drafts; store them with the final deliverable for auditability (see the provenance-log sketch below).
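A lightweight way to cover the last two points is an append-only provenance log kept next to the deliverable. The sketch below is one possible approach, not a prescribed format; the file name, fields, and model label are placeholders.

```python
# Append-only provenance log for AI-assisted drafting, stored next to the deliverable.
# File name, fields, and the model label are placeholders; the point is that prompts,
# outputs, and authors are captured at the time of use, not reconstructed later.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_step(log_path: Path, prompt: str, output: str, model: str, author: str) -> None:
    """Record one prompt/response pair, with a hash of the output for later audit."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "author": author,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # one JSON object per line (JSONL)

# Example call wherever a drafter uses an approved model
log_ai_step(
    Path("deliverable_provenance.jsonl"),
    prompt="Summarise section 4 findings on IT traceability",
    output="(model output would go here)",
    model="approved-internal-model",  # placeholder, not a specific product
    author="j.citizen",
)
```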
Bottom line
AI can help with speed, but it cannot replace source integrity and expert judgment. Build disclosure, verification, and accountability into every engagement, internal or external, so errors get caught before they reach ministers or the public.
If your team needs structured upskilling on AI oversight and safe adoption, see our AI courses by job.