A Canadian province paid $1.6M for a Deloitte health report. An investigation found false citations. Here's what government buyers should do now.
A 526-page Deloitte report commissioned by Newfoundland and Labrador's Department of Health and Community Services contained false and potentially AI-generated citations, according to an investigation by The Independent. The government paid just under $1.6 million for the report, which covered virtual care, retention incentives, and the pandemic's impact on health-care workers amid staffing shortages.
Deloitte said it stands by the report's recommendations and is making "a small number of citation corrections," adding that AI was not used to write the report but was used selectively to support some references. The investigation found citations to academic papers that do not exist, real researchers listed on papers they didn't work on, and fictional coauthorships. One cited paper from the Canadian Journal of Respiratory Therapy could not be located in the journal's database.
Gail Tomblin Murphy, an adjunct professor at Dalhousie University, said she was cited on a paper that doesn't exist and that she had worked with only three of the six other researchers listed in the false citation. As of now, the report remains available on the province's website, and the new premier's office has not publicly addressed the issue.
Why this matters for public-sector leaders
Policy, budgets, and staffing decisions rely on evidence that must be verifiable. If citations are fabricated or misattributed, the analysis built on them can mislead decisions and undermine public trust.
This isn't an isolated flare-up. In Australia, Deloitte used Azure OpenAI to help produce a separate government report, which was later revised after hallucinated references and a fabricated court quote were flagged; the Australian government received a partial refund.
Immediate steps if your department received a consultant report with suspect citations
- Pause reliance on the report's recommendations until citations are verified.
- Request a full errata list, version history, and a detailed methodology appendix (how sources were found, inclusion/exclusion criteria).
- Require written disclosure of all AI tools used, where and how they were used, and who reviewed AI outputs.
- Run a rapid reference audit: sample 20-30% of citations across sections and confirm each cited paper exists, supports the claim it's attached to, and lists the correct authors.
- Use independent databases (Crossref, PubMed, Google Scholar, Scopus) and check DOIs; spot-check author affiliations and publication years (a verification sketch follows this list).
- Ask for working files: spreadsheets, statistical code, cost-effectiveness models, and data sources to replicate key calculations.
- Document everything. If material errors are found, escalate to legal and procurement for remedies.
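To make the DOI check concrete, here is a minimal Python sketch that looks up a sampled citation against Crossref's public REST API. The `requests` dependency, the example DOI, and the expected title are assumptions for illustration; PubMed and Scopus offer similar lookups, and a human reviewer should still adjudicate any mismatch.

```python
import requests

CROSSREF = "https://api.crossref.org/works/"

def check_doi(doi: str, expected_title: str) -> dict:
    """Look up a DOI in Crossref and compare the registered title
    against the title cited in the report."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code == 404:
        # DOI is not registered anywhere Crossref knows about
        return {"doi": doi, "exists": False}
    resp.raise_for_status()
    msg = resp.json()["message"]
    registered = (msg.get("title") or [""])[0]
    authors = [
        f"{a.get('given', '')} {a.get('family', '')}".strip()
        for a in msg.get("author", [])
    ]
    return {
        "doi": doi,
        "exists": True,
        "registered_title": registered,
        # crude match: exact comparison after lowercasing; flag for human review
        "title_matches": registered.strip().lower() == expected_title.strip().lower(),
        "authors": authors,
    }

# Hypothetical sampled citation -- replace with entries from the report
print(check_doi("10.1000/example-doi", "Virtual care and workforce retention"))
```

Running this over a 20-30% sample surfaces nonexistent DOIs immediately and gives the auditor the registered author list to compare against the citation as printed.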
Procurement clauses to add before your next engagement
- AI usage disclosure: Vendors must declare any AI tools used, where, and under whose supervision.
- No synthetic citations: All references must be verifiable, with DOIs/URLs and access dates.
- Verification duty: Vendor certifies references have been checked for existence, authorship, and relevance.
- Evidence locker: Delivery must include reference database, search strings, inclusion/exclusion criteria, and all analysis files.
- Right to audit: Government can audit methodology and citations; vendor must cooperate within defined timelines.
- Error remediation: Timeline and process for corrections; material errors trigger fee reductions or refunds.
- Indemnity: Vendor is liable for damages arising from fabricated or materially misleading content.
- Named accountability: Lead author(s) and quality reviewer(s) sign off on accuracy and completeness.
- Conflicts disclosure: Declare financial or institutional ties that could bias source selection.
A quick checklist to verify health economics and workforce claims
- Cost-effectiveness: Confirm sources for unit costs, utilities, and assumptions; rerun sensitivity analyses (a minimal example follows this checklist).
- Workforce data: Trace vacancy rates, turnover, and retention drivers to original surveys or administrative data.
- Comparative claims: Ensure international comparisons adjust for system differences and time periods.
- Pandemic impacts: Validate period studied, data recency, and how confounders were handled.
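For the cost-effectiveness item, here is a minimal sketch of rerunning a one-way sensitivity analysis on an incremental cost-effectiveness ratio (ICER). All figures are illustrative placeholders, not values from the Deloitte report; the point is that key results should move predictably when you vary the inputs the vendor cited.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative inputs only -- replace with the report's sourced figures
base = dict(cost_new=12_000.0, cost_old=9_000.0, qaly_new=1.40, qaly_old=1.15)
print(f"Base-case ICER: ${icer(**base):,.0f} per QALY")

# One-way sensitivity: vary the intervention cost +/-20% and watch the ICER move
for factor in (0.8, 1.0, 1.2):
    varied = dict(base, cost_new=base["cost_new"] * factor)
    print(f"cost_new x{factor:.1f}: ${icer(**varied):,.0f} per QALY")
```

If you cannot reproduce the report's base case or its sensitivity ranges from the sources it cites, that is a material finding worth escalating.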
Governance moves to reduce risk across your portfolio
- Adopt an AI risk management approach aligned with public guidance, such as the NIST AI Risk Management Framework.
- Stand up a reference-checking SOP: standardized sampling rate, verification tools, and sign-off roles.
- Create a "no surprises" policy: Any AI use in deliverables must be disclosed before contract award.
- Require structured citations in deliverables (DOI, title, authors, year, URL) and machine-readable reference lists (an example record follows this list).
- Introduce red-team reviews on high-impact reports: an internal or third-party unit tasked to find errors.
- Publish correction logs for transparency on major studies that inform policy.
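To illustrate the structured-citation requirement, here is a minimal Python check that a machine-readable reference record carries the required fields. The field set and the sample entry are assumptions for illustration, not a mandated schema; the idea is that structured records make the audits above scriptable instead of manual.

```python
REQUIRED_FIELDS = {"doi", "title", "authors", "year", "url", "accessed"}

def validate_reference(ref: dict) -> list[str]:
    """Return a list of problems with one structured citation record."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - ref.keys()]
    if "authors" in ref and not isinstance(ref.get("authors"), list):
        problems.append("authors must be a list, not a free-text string")
    return problems

# A hypothetical entry from a vendor's machine-readable reference list
ref = {
    "doi": "10.1000/example-doi",
    "title": "Retention incentives for rural health workers",
    "authors": ["A. Researcher", "B. Coauthor"],
    "year": 2022,
    "url": "https://doi.org/10.1000/example-doi",
    "accessed": "2025-01-15",
}
print(validate_reference(ref) or "record is well-formed")
```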
Communications playbook if this happens to you
- State the facts: what was found, who is verifying, and expected timelines.
- Announce interim safeguards (pause implementation, independent review).
- Commit to publishing the errata, revised report, and any contract remedies.
- Share the updated procurement controls you're putting in place.
Building team capability
If AI is entering consultant workflows, your teams need the literacy to question, verify, and escalate. Start small: citation audits, AI-use disclosures, and a clear verification owner on every project.
If you're upskilling staff who review AI-assisted deliverables, here's a curated path to shorten the learning curve: AI courses by job role.
Bottom line
Allegations of false or AI-influenced citations in a high-cost health report are a signal for stronger controls, not a reason to stall progress. Tighten procurement, force transparency on AI use, and make reference verification a non-negotiable step. Trust comes from proof: check the sources before you act on the recommendations.