Deloitte's $1.6M N.L. health report cites studies that don't exist, raising new questions about AI

$1.6M N.L. health plan by Deloitte cites studies that don't exist; authors disavow work credited to them. Officials must verify sources, disclose AI use, and fix the record.

Published on: Nov 23, 2025

AI-linked citation errors surface in $1.6M N.L. Health Human Resources Plan

Newfoundland and Labrador's Health Human Resources Plan, produced by Deloitte and published in May, contains multiple citations to studies that do not appear to exist. The report, which cost the province nearly $1.6 million, is now under scrutiny for referencing research that its named authors say they never conducted and papers that cannot be found in academic databases.

This is the second major government-commissioned report in recent months to face questions about fabricated sources. The pattern raises a bigger issue for public-sector leaders: how third-party vendors are using generative AI in policy work, and how governments verify the evidence that informs decisions.

What happened

The 526-page plan, commissioned to help address long-standing nurse and physician shortages, uses literature citations to justify key recommendations on recruitment, retention incentives, virtual care, and pandemic impacts on health workers. At least four citations appear to be false or unverifiable.

  • A citation attributed to Martha MacLeod and colleagues, titled "The cost-effectiveness of a rural retention program for registered nurses in Canada," was called "false" and "potentially AI-generated" by MacLeod, who says her team never conducted a cost-effectiveness analysis and never had the financial data to do so.
  • Another citation, "The cost-effectiveness of local recruitment and retention strategies for health workers in Canada," was flagged as nonexistent by named co-author Gail Tomblin Murphy, who also noted that the listed team composition didn't match reality and warned that heavy AI use may have been involved.
  • A third citation claims Canadian registered respiratory therapists in acute care reported increased workload and stress during COVID-19. The provided link leads to an unrelated article, and the cited paper cannot be found on the journal's site or in academic search engines.

Why this matters for government, healthcare, and HR leaders

Policy credibility depends on verifiable evidence. Fabricated or misattributed citations don't just weaken a report; they can warp recruitment strategy, funding priorities, and workforce planning. In healthcare, that risk translates directly into patient access and safety.

The report also recommends expanding the use of generative AI in clinical decision support and operational analytics. Without strict governance, disclosure, and verification, the same tools meant to accelerate insight can quietly introduce untraceable errors into core decisions.

Not an isolated incident

Last month, Deloitte's Australian arm faced public criticism after a report it produced for the Australian government was found to include apparent AI-generated errors, fabricated quotes, and references to nonexistent research. The firm partially refunded the government and later disclosed its use of Azure OpenAI in the work, while stating that AI did not affect the report's substantive outcomes.

What to do in the next 30 days

  • Freeze reliance on contested sections of the report until all citations are independently verified.
  • Stand up a rapid review group (policy, clinical, HR, legal) to audit every literature-backed claim, starting with those tied to funding, incentives, or staffing models.
  • Require Deloitte to provide DOIs, URLs, PDFs, and author confirmations for every citation that informs recommendations.
  • Document all discrepancies and set a deadline for correction, replacement evidence, or formal retraction.
  • Prepare contingency guidance for recruitment and retention programs so frontline teams aren't left waiting.

Procurement and vendor management: immediate guardrails

  • AI disclosure clause: Vendors must declare where, how, and to what extent generative AI was used, including prompts, models, and human review steps.
  • Evidence chain-of-custody: Every claim tied to literature must include a DOI, a link to the publisher or journal, and a copy or citation note accessible to the client (a sketch of such a record follows this list).
  • Third-party audit rights: Reserve the right to audit citations and analysis methods; include penalty/refund terms for false or unverifiable sources.
  • Human-in-the-loop: Require named subject-matter experts to sign off on evidence quality and applicability.
  • Reproducibility standard: For quantitative claims, vendors must provide data sources, assumptions, and calculation templates.
  • Model risk assessment: If AI supported analysis, request a brief risk summary: data provenance, bias checks, limits, and validation steps.
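
To make the chain-of-custody requirement concrete, here is a minimal sketch of what a per-claim evidence record could look like. The Python dataclass and its field names are illustrative assumptions, not a standard or a mandated format; adapt them to whatever tracking system your procurement team already uses.

```python
# Hypothetical sketch of an evidence "chain-of-custody" record.
# Field names are illustrative assumptions, not a standard.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvidenceRecord:
    claim: str                            # the report claim this source supports
    citation: str                         # full bibliographic citation as printed
    doi: Optional[str] = None             # DOI, if one exists
    publisher_url: Optional[str] = None   # link to the journal/publisher page
    copy_on_file: bool = False            # PDF or citation note archived for the client
    author_confirmed: bool = False        # a named author confirmed the work exists
    verified_by: Optional[str] = None     # reviewer who checked the source
    verified_on: Optional[date] = None    # date of the check
    notes: str = ""                       # discrepancies, e.g. "link resolves elsewhere"

    @property
    def verified(self) -> bool:
        """A claim counts as verified only with a resolvable source and a named reviewer."""
        return bool((self.doi or self.publisher_url) and self.verified_by)
```

A record like this is what makes the audit-rights and penalty clauses enforceable: every claim either has a resolvable source, an archived copy, and a named reviewer, or it doesn't.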

Verification checklist for evidence-backed claims

  • Confirm each citation via the publisher's site, a DOI lookup (Crossref), PubMed/Google Scholar, or the journal's archive; a scripted version of this check is sketched after the list.
  • Match author names, titles, journal, volume/issue, pages, and publication year.
  • Check that the cited paper actually supports the stated claim (not just tangentially related).
  • If a link resolves to a different article, treat the claim as unverified until corrected.
  • For grey literature (reports, white papers), verify organizational source, publication date, and accessibility.
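
Much of this checklist can be pre-screened programmatically before human review. Below is a minimal sketch using the public Crossref REST API, assuming Python with the requests library; the matching heuristic (title substring plus author surnames) is an illustrative assumption, and a non-match should trigger manual review rather than automatic rejection.

```python
# Minimal citation pre-screen against the public Crossref REST API.
# Assumes the `requests` library is installed.
import requests

CROSSREF = "https://api.crossref.org/works"

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"{CROSSREF}/{doi}", timeout=10)
    return resp.status_code == 200

def search_by_title(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for works whose bibliographic data matches the title."""
    resp = requests.get(
        CROSSREF,
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def looks_verified(cited_title: str, cited_surnames: set[str]) -> bool:
    """Treat a citation as verified only if title AND author surnames match a record."""
    for item in search_by_title(cited_title):
        found_title = (item.get("title") or [""])[0].lower()
        found_surnames = {a.get("family", "").lower() for a in item.get("author", [])}
        if cited_title.lower() in found_title and cited_surnames <= found_surnames:
            return True
    return False

if __name__ == "__main__":
    # A citation under review (the disputed title named earlier in this article).
    title = "The cost-effectiveness of a rural retention program for registered nurses in Canada"
    surnames = {"macleod"}
    print("verified" if looks_verified(title, surnames)
          else "UNVERIFIED - escalate for manual review")
```

Run against the disputed MacLeod citation, a script like this should report it unverified, which is exactly the point: automated lookups catch fabricated references cheaply, and human reviewers then confirm the edge cases.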

Governance to reduce AI-related policy risk

  • Adopt a public-sector AI policy that sets expectations for disclosure, validation, and human accountability. A useful reference is the NIST AI Risk Management Framework.
  • Centralize evidence standards across departments so every major report follows the same citation, verification, and audit process.
  • Build internal capability: train policy, HR, and clinical leaders on AI use cases, limits, and verification methods.
  • Publish disclosures on government sites when AI is used in commissioned work, including the level of usage and verification steps taken.

What leaders and stakeholders are saying

Gail Tomblin Murphy warned that the presence of invented or misattributed citations suggests heavy AI use and highlighted the need for validated evidence in public reports.

NDP Leader Jim Dinn called the situation "disgusting," stressing that confidence in healthcare is already strained and that flawed evidence can affect real lives.

Current status in N.L.

As of Nov. 22, the Health Human Resources Plan remains on the provincial website without an AI-use disclosure. Questions to the Department of Health and Community Services and to Premier Tony Wakeham's office about verification plans, refund considerations, and AI policy have not been answered publicly.

In June, Deloitte was selected to conduct a core staffing review of nursing resources, expected in the spring. Given these findings, leaders should set clear evidence standards, and enforce them, before accepting further deliverables.

Bottom line

Evidence is infrastructure. If the citations aren't real, the recommendations aren't reliable. Put disclosure, verification, and accountability into your contracts now, and upskill your teams to spot problems before they shape policy.

