N.L. NDP demands strict AI rules to restore trust after Deloitte health report used fake citations

N.L.'s NDP demands strict AI rules after fake citations in Deloitte's $1.6M health report. Leader Jim Dinn says trust and real consultation must come first.

Categorized in: AI News, Healthcare
Published on: Nov 25, 2025

NDP pushes for AI rules after errors found in Deloitte healthcare report

Newfoundland and Labrador's NDP is urging the provincial government to bring in strict rules on the use of artificial intelligence after fabricated citations were discovered in a $1.6-million Health Human Resources Plan prepared by Deloitte.

NDP Leader Jim Dinn called the use of AI in government reports "disgusting," saying it undermines confidence in efforts to fix healthcare. "The solutions should come from real people - full stop," he said, pointing to the need for consultation with frontline staff rather than machine-generated content.

The Department of Health and Community Services said Deloitte acknowledged the errors, stands by the report's conclusions, and will conduct a full review of its citations. Questions remain over whether the province will seek a refund or set new AI policies for government-commissioned work.

Why this matters to people working in healthcare

Policy reports shape staffing, funding, and care models. If citations are fake or sources are fabricated, decisions can drift away from what patients and providers actually need.

AI can speed up drafting, but it can also produce confident-sounding errors. In a healthcare system already stretched, trust in the evidence used to make decisions is not optional; it's the baseline.

Context beyond Newfoundland and Labrador

In October, Deloitte was found to have used AI in a report commissioned by the Australian government and issued a partial refund after errors, including fabricated citations, came to light. That case prompted warnings that public money should pay for verified expertise, not automated output.

For background, see reporting in The Guardian on the Australian case: The Guardian Australia coverage.

What Dinn is asking for

Dinn wants proactive AI regulations to restore trust and ensure government work is based on real consultation. He argues fabricated or unverified content could distort how decisions are made and erode trust among healthcare workers and the public.

He also pointed to recent provincial controversies and said rules should prevent taxpayer funds from being used on reports that rely on AI to "give the illusion government is prepared to fix the problems."

What effective AI rules could look like for government reports

  • Mandatory disclosure: Clear statements on where and how AI was used in any report or analysis.
  • Source verification: No AI-generated citations. Every source must be human-verified and publicly checkable.
  • Human accountability: Named subject-matter experts sign off on methods, citations, and conclusions.
  • Audit trails: Retain drafts, prompts, datasets, and literature logs for independent review.
  • Procurement clauses: Contracts require disclosure of AI use, verification standards, and penalties for false citations.
  • Independent review: Third-party audits for major reports that inform funding, staffing, or service design.
  • Data protection: Prohibit feeding confidential or patient-related information into external AI systems.
  • Enforcement: Financial penalties, public reporting of breaches, and suspension from future contracts.

What you can do now inside your organization

  • Ask vendors to declare any AI use, including tools, prompts, and the sections affected.
  • Require a citation audit: spot-check references, follow links, and verify journal indexing (PubMed, Scopus, etc.); a simple script can handle the first pass, as sketched after this list.
  • Set an internal policy: where AI can help (summaries, formatting) and where it can't (evidence synthesis, citations, conclusions).
  • Use a red-flag checklist: broken links, journals that don't exist, author names that don't appear in databases, or references that don't match quoted claims.
  • Keep human-in-the-loop: clinical leaders, health workforce planners, and data analysts must review and sign off on any report used for decisions.
  • Document everything: maintain records of literature searches, expert interviews, and data sources.
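
One way to operationalize the citation audit above is a short script that tries to resolve each reference's DOI and follow each cited link, flagging anything that fails. The sketch below is a minimal, illustrative first pass only: the reference list, field names (doi, url), and titles are hypothetical, and automated checks do not replace human verification or database searches in PubMed or Scopus.

```python
# Minimal citation spot-check sketch.
# Assumptions: references have been extracted into simple dicts with optional
# "doi" and "url" fields (hypothetical examples below), and network access is
# available. Flags are a starting point for human review, not a verdict.
import requests

REFERENCES = [
    {"title": "Health workforce planning in rural regions", "doi": "10.1000/example-doi"},
    {"title": "Primary care staffing models", "url": "https://example.org/report.pdf"},
]

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI resolves via doi.org (following redirects)."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=timeout)
    return resp.status_code < 400

def url_responds(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited link responds without a client or server error."""
    resp = requests.get(url, allow_redirects=True, timeout=timeout, stream=True)
    return resp.status_code < 400

for ref in REFERENCES:
    flags = []
    try:
        if "doi" in ref and not doi_resolves(ref["doi"]):
            flags.append("DOI does not resolve")
        if "url" in ref and not url_responds(ref["url"]):
            flags.append("link returns an error")
    except requests.RequestException as exc:
        flags.append(f"network error: {exc}")
    status = "; ".join(flags) if flags else "no automated red flags"
    print(f"{ref['title']}: {status}")
```

Anything the script flags still needs a human to confirm the journal exists, the authors are real, and the source actually supports the claim it is attached to.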

Quick checklist for procurement and leadership

  • Insert AI disclosure requirements into all RFPs and contracts.
  • Specify verification standards for literature and data.
  • Define penalties for false or unverified citations.
  • Require named experts and their credentials on the cover page.
  • Include an audit right and the ability to publish non-compliance.
  • Schedule independent reviews for high-impact deliverables.
  • Mandate privacy and security controls for any tool used.

Government reference for public-sector AI use

For a practical model, review the Government of Canada's approach to automated decision-making and risk assessment in public services. While federal and provincial contexts differ, the principles are useful: Directive on Automated Decision-Making.

Bottom line

Healthcare teams need decisions rooted in verified evidence and real consultation with people who do the work. AI can help with efficiency, but it can't replace expert judgment, transparent sourcing, and accountability.

Clear rules, strong contracts, and human oversight will protect patients, providers, and public trust: exactly what our system needs right now.

Optional training for safe, effective AI use

If your team needs baseline training on safe AI practices, auditing methods, and policy set-up, see these curated resources: AI courses by job role.

