Deloitte's $1.6M Canadian healthcare report flagged for AI-linked errors - weeks after Australia refund
Deloitte's 526-page healthcare workforce report for Newfoundland and Labrador is under review after allegations of AI-related citation errors. The allegations surfaced weeks after Deloitte admitted using generative AI to support an Australian government report that contained inaccuracies and issued a partial refund.
Why this matters for healthcare leaders
Workforce planning, retention incentives, and virtual care strategy depend on solid evidence. If citations are fabricated or misattributed, cost models and recommendations can drift off course. This isn't just an academic issue: it affects budgets, staffing decisions, and patient access.
What happened in Canada
The province commissioned Deloitte for a report valued at roughly $1.6 million, delivered in May 2025. The Independent reported alleged problems, including citations to non-existent papers, misattributed authors, and co-author pairs who had never worked together.
Gail Tomblin Murphy, an adjunct professor at Dalhousie University, said the issues suggest heavy AI use. The report cited her as an author of a paper that doesn't exist.
Deloitte's response
Deloitte Canada says it stands by the report's recommendations. The firm stated that AI was not used to write the report, only "selectively" to support a small number of research citations, and that it is revising the document to correct citation errors without changing the findings.
What happened in Australia
In October, Deloitte agreed to a partial refund on a roughly $440,000 contract with the Australian government after a separate report was found to include incorrect references and fabricated citations. The Department of Employment and Workplace Relations (DEWR) had commissioned the review in 2024 to assess a compliance framework and its supporting IT system for job seeker obligations.
Deloitte acknowledged using generative AI to support parts of that work but said the technology didn't alter the substantive findings.
Practical takeaways for health systems
- Assume AI touched vendor deliverables unless stated otherwise. Require explicit disclosure of AI tools, prompts, and human review steps.
- Make citation integrity a contract term. Specify penalties or rework if fabricated or misattributed sources are found.
- Run automated checks before acceptance: reference resolvers (DOI/PMID), cross-author verification (ORCID), and de-duplication of sources. A minimal sketch of a DOI check follows this list.
- Insist on a reproducible evidence pack: PDFs, DOIs, and annotated excerpts that link each claim to its source.
- Segment evidence by decision type: clinical vs. operational vs. financial. Apply stricter thresholds for anything that drives safety, staffing, or cost estimates.
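To make the automated-check idea concrete, here is a minimal sketch of a pre-acceptance DOI check against the public Crossref REST API (`api.crossref.org/works/{doi}`). The citation list, the placeholder DOI, and the exact-title comparison are illustrative assumptions, not a production pipeline; real use would add rate limiting, PMID lookups, and fuzzy title matching.

```python
import requests

CROSSREF = "https://api.crossref.org/works/"

def doi_title(doi: str) -> str | None:
    """Return the title Crossref has registered for this DOI, or None if it doesn't resolve."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

# Hypothetical bibliography rows extracted from a vendor report.
citations = [
    {"doi": "10.1000/xyz123", "title": "A Cited Paper Title"},  # placeholder DOI
]

for c in citations:
    registered = doi_title(c["doi"])
    if registered is None:
        print(f"UNRESOLVED DOI: {c['doi']}")
    elif registered.strip().lower() != c["title"].strip().lower():
        print(f"TITLE MISMATCH for {c['doi']}: cited '{c['title']}' vs registered '{registered}'")
    else:
        print(f"OK: {c['doi']}")
```

A failed resolution doesn't prove fabrication on its own (newly minted DOIs can lag in the index), but it gives procurement a concrete, repeatable acceptance gate instead of a spot check.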
Procurement checklist for external reports
- AI usage declaration: tools used, where applied (summaries, literature scans, drafts), and human-in-the-loop signoffs.
- Source transparency: full bibliography with DOIs/PMIDs, author affiliations, and versioned links.
- Methodology clarity: inclusion/exclusion criteria for literature, date ranges, and quality grading.
- Third-party audit option: right to commission an independent reference audit before final payment.
- Corrigendum policy: timelines and responsibilities for correcting errors post-delivery.
Reduce AI-citation risk inside your organization
- Ban "AI-invented" sources: require that every citation be validated in PubMed, Google Scholar, or publisher sites before inclusion.
- Use AI for first-pass discovery, not final sourcing. Pair it with manual verification and tools that resolve DOIs and PMIDs (see the sketch after this list).
- Adopt a two-tier review: content owner checks claims; a separate reviewer validates citations and data tables.
- Label AI-assisted sections in drafts so reviewers know where to apply extra scrutiny.
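As one way to operationalize the PubMed validation rule above, here is a minimal sketch using NCBI's public E-utilities `esummary` endpoint to confirm a PMID exists and that claimed author surnames actually appear on the record. The function names and the sample PMID/surname inputs are assumptions for illustration; production use should add an NCBI API key, request batching, and ORCID cross-checks.

```python
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pubmed_record(pmid: str) -> dict | None:
    """Fetch the E-utilities summary for a PMID; None if PubMed has no such record."""
    resp = requests.get(
        ESUMMARY,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    record = resp.json().get("result", {}).get(pmid)
    # Unknown IDs come back as a stub carrying an "error" field, not as a 404.
    if not record or "error" in record:
        return None
    return record

def authors_listed(pmid: str, claimed_surnames: list[str]) -> bool:
    """True only if the record exists and every claimed surname appears in its author list."""
    record = pubmed_record(pmid)
    if record is None:
        return False
    names = " ".join(a.get("name", "") for a in record.get("authors", [])).lower()
    return all(s.lower() in names for s in claimed_surnames)

# Hypothetical check: does this PMID really list these authors?
print(authors_listed("12345678", ["Smith", "Lee"]))  # placeholder inputs
```

This catches the two failure modes reported in the Deloitte cases, citations to records that don't exist and authors attached to papers they never wrote, before a document leaves review.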
Bottom line
AI can speed up literature scans, but it also introduces silent errors if teams skip verification. For health leaders, the fix is straightforward: contractual guardrails, automated checks, and human accountability before recommendations touch policy or budgets.
If your team needs a structured way to build practical AI skills and governance habits, browse role-based options here: Complete AI Training - Courses by Job.