Deloitte faces calls to repay full $440,000 over AI-tainted DEWR report
Pressure is building on Deloitte to return its entire $440,000 fee after a Federal Government report it delivered contained AI-generated errors. The firm has repaid $97,587 so far, but senators and officials say that is nowhere near enough.
The Department of Employment and Workplace Relations (DEWR) says the original report included non-existent academic references and a fabricated quote from a Federal Court judgment. Deloitte resubmitted the report without the AI mistakes, but there has been no public apology.
What we know
- Senate Estimates heard that the agency detected errors only after media reporting, not through Deloitte's own disclosure.
- Deloitte advised that staff involved in the report have been told to undertake AI training. The Greens say that falls short of accountability.
- Finance officials described the conduct as "troubling." Senators Barbara Pocock and Penny Allman-Payne pressed for stronger action and a full refund.
- DEWR Secretary Natalie James said, "We should not be receiving work that has glaring errors in footnotes and sources... I am struck by the lack of any apology to us."
- Environment Minister Murray Watt said departmental processes must improve given the growing use of AI in contracted work.
Why this matters for government leaders
Outsourcing is meant to buy expertise you don't have in-house. If contractors use AI, they must be transparent about it and accountable for accuracy. When footnotes and sources are wrong, the agency carries the risk: policy, legal, and public trust.
This case highlights a simple truth: consultant outputs require verification, and contracts must anticipate AI use. Refunds, clawbacks, and bans only work if the clauses exist and are enforced.
Immediate steps agencies can take
- Require consultants to disclose AI use upfront, including which tools were used, for what tasks, and how outputs were verified.
- Introduce a reference and citations check as a standard QA gate for any policy, legal, or research deliverable (a minimal automated check is sketched after this list).
- Hold a portion of fees until evidence of human review, fact-checking, and source validation is provided.
- Mandate an auditable "sources pack" with every report: links, citations, copies of referenced materials, and quote provenance.
- Set penalties for fabricated references, unapproved AI use, and failure to disclose material issues, up to full fee recovery.
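Parts of that citations gate can be scripted. The Python sketch below flags references whose DOI or URL does not resolve. It assumes consultants deliver a machine-readable references.csv with id, title, doi, and url columns; that format is hypothetical, not a government standard, and a failed check means "verify by hand", not "fabricated".

```python
"""Minimal citation QA gate: flag references that do not resolve.

A sketch only. Paywalls, outages, and servers that reject HEAD
requests will also cause failures, so flagged rows need a human look.
"""
import csv
import urllib.request


def resolves(url: str, timeout: int = 10) -> bool:
    """Return True if the URL answers with an HTTP status below 400."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-qa/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False


def check_references(path: str) -> list[dict]:
    """Return the rows whose DOI or URL could not be resolved."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            doi = row.get("doi", "").strip()
            url = row.get("url", "").strip()
            # Prefer the DOI: doi.org redirects to the publisher record.
            target = f"https://doi.org/{doi}" if doi else url
            if not target or not resolves(target):
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    for row in check_references("references.csv"):
        print(f"VERIFY MANUALLY: {row.get('id')} - {row.get('title')}")
```

A gate like this catches only the crudest failures, such as citations that point nowhere, but that is exactly the class of error at issue here.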
Contract language to add now
- AI use disclosure: no AI-generated content without prior written consent and a human verification plan.
- Accuracy warranty: all quotations, references, and legal citations must be verifiably accurate; fabricated or unverifiable content triggers fee clawback.
- Verification evidence: deliver a fact-check log, reference list with source files, and a named human approver for each section (one machine-readable shape for the log is sketched after this list).
- Right to audit: the agency may review prompts, model versions, and change logs for any AI-assisted work, subject to security constraints.
- Performance remedies: staged payments, cure periods, and suspension or bans for repeat breaches (up to five years).
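To make the verification-evidence clause concrete, the fact-check log can be a simple structured file delivered alongside the report. The Python sketch below shows one possible shape; the field names and the example entry are illustrative assumptions, not an agreed government schema.

```python
"""One way to structure 'verification evidence': a fact-check entry
per claim, serialised to JSON and delivered with the report.
All field names and the sample entry are invented placeholders."""
from dataclasses import dataclass, asdict
import json


@dataclass
class FactCheckEntry:
    section: str        # report section where the claim appears
    claim: str          # the sentence or figure being verified
    source: str         # citation, URL, or document reference
    ai_assisted: bool   # was AI used to draft or locate this claim?
    verified_by: str    # named human approver, per the clause above
    method: str = "manual source comparison"
    notes: str = ""


# Hypothetical entry, for illustration only.
log = [
    FactCheckEntry(
        section="3.2 Compliance outcomes",
        claim="Suspension volumes as stated in the executive summary.",
        source="Departmental administrative data extract (placeholder)",
        ai_assisted=False,
        verified_by="J. Citizen, engagement partner",
    ),
]

with open("factcheck_log.json", "w", encoding="utf-8") as f:
    json.dump([asdict(e) for e in log], f, indent=2)
```

The point of a structured log is auditability: an agency reviewer can sample entries, follow each source, and see a named person attached to every verified section.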
The political response
Senator Barbara Pocock called the work "wilful negligence," saying it looked like "expensive corner-cutting disguised as consultancy." She wants a full refund and an apology to the Australian public.
She also questioned the logic of outsourcing core work if contractors are further outsourcing to AI without proper checks. "Who is regulating the system of contracting?" she asked, noting that whistleblowers and media revealed the issues, not the contractor.
The Greens are urging the government to ban outsourcing of core work to "dodgy contractors," with bans of up to five years for unethical behaviour.
About the TCF findings
Even after corrections, Deloitte's report indicates that hundreds of thousands of welfare recipients face payment suspensions each year under the Targeted Compliance Framework (TCF). The government has not asserted that the TCF is lawful.
Senator Penny Allman-Payne said, "These payments can be the difference between food on the table or going hungry," calling for suspensions to stop if the system cannot be defended.
What to improve inside departments
- Stand up an AI assurance function across procurement, legal, and policy to review high-risk consultant outputs.
- Adopt a standard checklist for AI-risk scoring, reference validation, legal citations, and data security (a simple scoring sketch follows this list).
- Run targeted training for contract managers and SES on AI disclosure, verification, and remedies.
- Share lessons learned across agencies to avoid repeating the same failures.
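An AI-risk checklist can be as simple as weighted yes/no questions that route deliverables into review tiers. The Python sketch below is illustrative only; the questions, weights, and thresholds are assumptions each agency would set for itself.

```python
"""Weighted checklist sketch for triaging consultant deliverables by
AI risk. Questions, weights, and thresholds are illustrative."""

CHECKLIST = [
    # (question, weight) -- positive weights raise risk, negative lower it
    ("Deliverable informs legal or compliance decisions", 3),
    ("Deliverable quotes legislation or case law", 3),
    ("Contractor has prior verified-accuracy breaches", 4),
    ("AI use was disclosed by the contractor", -2),
    ("References were independently spot-checked", -2),
    ("Named human approver provided for each section", -1),
]


def risk_score(answers: dict[str, bool]) -> int:
    """Sum the weights of every question answered 'yes'."""
    return sum(w for q, w in CHECKLIST if answers.get(q, False))


# Example triage run with hypothetical answers.
answers = {
    "Deliverable informs legal or compliance decisions": True,
    "Deliverable quotes legislation or case law": True,
    "AI use was disclosed by the contractor": False,
}
score = risk_score(answers)
if score >= 4:
    band = "full manual review"
elif score >= 1:
    band = "targeted spot checks"
else:
    band = "standard QA"
print(f"score={score}: route to {band}")
```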
Upskilling for AI oversight (optional)
If your team needs sharper AI literacy for procurement, assurance, or policy review, consider short, practical courses that cover prompt risks, verification, and audit trails.
- AI courses by job - oversight and operations
- Prompt engineering fundamentals - verification and controls