Deloitte faces fresh AI backlash over $1.6M Canadian healthcare report, weeks after Australian refund

A $1.6M N.L. healthcare report faces scrutiny over AI-linked citation errors. Health leaders should demand disclosure, verify sources, and tie payment to evidence quality.

Published on: Nov 28, 2025

AI-linked citation errors in $1.6M Newfoundland and Labrador healthcare report: what healthcare leaders should do next

A 526-page healthcare report commissioned by Newfoundland and Labrador for nearly $1.6 million has come under scrutiny for AI-linked citation issues, as reported by Fortune. The report was delivered in May 2025 and covers pandemic impacts on staff, retention incentives, and virtual care.

Deloitte Canada said it stands by the recommendations and is "revising the report to make a small number of citation corrections, which do not impact the report findings." The firm added, "AI was not used to write the report; it was selectively used to support a small number of research citations."

This follows an October incident in Australia, where Deloitte issued a partial refund on a government engagement after errors tied to generative AI. For health systems relying on vendor analysis to shape workforce strategy and care models, the signal is clear: evidence quality needs active oversight.

What went wrong

  • Citations used in cost analyses were attributed to fictional academic papers.
  • Authors were credited for papers they did not write.
  • Co-authorships were cited between researchers who never worked together.

Why this matters for healthcare

Staffing models, incentive programs, and virtual care plans depend on accurate evidence. Bad citations can skew cost projections, misdirect funding, and stall needed interventions. The risk isn't theoretical; it can ripple into patient access and workforce morale.

Procurement guardrails for research and strategy work

  • Require full AI disclosure: where it was used, which tools/models, and internal controls applied.
  • Demand a citations appendix with DOIs or PMIDs and live links; run a 5-10% randomized spot check.
  • Include a "no fabricated sources" warranty and the right to withhold payment until verification passes.
  • Ask for a versioned evidence folder (studies, data extracts) and reproducible analysis files.
  • Mandate human expert review and named sign-offs with credentials.
  • Set penalties for inaccuracies: remediation timelines, fee reductions, and impact on vendor eligibility.
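The randomized spot check above is easy to make reproducible and auditable. Below is a minimal sketch (function and parameter names are illustrative, not from any standard tool): draw a fixed fraction of citations at random, with a seed so the same audit sample can be regenerated later.

```python
import random

def spot_check_sample(citations: list, fraction: float = 0.05, seed=None) -> list:
    """Draw a randomized sample of citations for manual verification.

    fraction=0.05 implements the low end of a 5-10% spot check;
    passing a fixed seed makes the audit sample reproducible.
    """
    rng = random.Random(seed)
    # Always check at least one citation, even for short reference lists.
    k = max(1, round(len(citations) * fraction))
    return rng.sample(citations, k)
```

Recording the seed alongside the audit results lets a second reviewer reconstruct exactly which citations were checked.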

Quick checks your team can run

  • Search every citation on PubMed or Google Scholar; confirm the paper exists and matches title, journal, and year.
  • Verify author lists via journal sites or ORCID; spot mismatches in minutes.
  • Follow the DOI link; dead or mismatched links are a red flag.
  • For cost analyses, trace each claim back to a primary source, not a secondary summary.
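Some of these checks can be pre-screened automatically before a human reviewer takes over. The sketch below (field names like `doi` and `pmid` are assumptions about how a citations appendix might be structured) flags records that are untraceable or malformed, so reviewers spend their time on the citations most likely to be fabricated.

```python
import re

# DOIs start with "10.", a 4-9 digit registrant code, a slash, then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")
# PubMed IDs are plain numbers, currently up to 8 digits.
PMID_PATTERN = re.compile(r"^\d{1,8}$")

def flag_citation(citation: dict) -> list:
    """Return a list of red flags for one citation record.

    Expects optional keys 'title', 'doi', and 'pmid' (assumed field names).
    An empty list means the record passed the automated pre-screen;
    it still needs a human check against PubMed or the journal site.
    """
    flags = []
    doi = citation.get("doi", "").strip()
    pmid = citation.get("pmid", "").strip()
    if not doi and not pmid:
        flags.append("no DOI or PMID: untraceable reference")
    if doi and not DOI_PATTERN.match(doi):
        flags.append("malformed DOI: " + doi)
    if pmid and not PMID_PATTERN.match(pmid):
        flags.append("malformed PMID: " + pmid)
    if not citation.get("title", "").strip():
        flags.append("missing title")
    return flags
```

A clean pre-screen is not proof the paper exists; it only filters out the references that fail before anyone opens a browser.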

If AI is used, what "good" looks like

  • Transparent disclosure and logs showing where AI supported search or summarization.
  • All AI outputs cross-checked against primary sources by human reviewers.
  • Consistent citation style, working links, and no unverifiable references.

Context to keep in view

Regulators in several countries have fined and sanctioned Deloitte over the past five years for auditing issues. That broader record has increased scrutiny on the firm's quality controls and use of automation in client work.

Action plan for health leaders this quarter

  • Update RFP templates to include AI disclosure, verification steps, and warranty clauses.
  • Stand up a lightweight internal citation audit: 1-2 clinicians or analysts, two hours per major deliverable.
  • Prioritize sources with DOIs/PMIDs; de-prioritize claims that rely on untraceable references.
  • Train managers to spot AI "tells" in reports: generic phrasing, inconsistent references, dead links.
  • Document and escalate issues fast; require corrected deliverables before implementation decisions.


Bottom line

Use vendors, use AI; just don't outsource your standards. Demand transparency, verify the evidence, and tie quality to payment. That's how you protect staff, budgets, and patients from bad footnotes.

