NL government acknowledges role in AI citation errors; stricter reviews and corrected Education Accord draft due in new year
Newfoundland and Labrador's Education Ministry is acknowledging responsibility for AI-generated citation errors in the province's Education Accord and says a corrected draft will be released in the new year. The department called the inaccurate references "unacceptable" and committed to strict review, human verification, and transparent quality controls before publication.
What happened
The Education Accord, a 418-page roadmap for K-12 and post-secondary reform, was released in August after 18 months of work led by co-chairs Karen Goodnough and Anne Burke of Memorial University's Faculty of Education. In September, Radio-Canada reported at least 15 fake or non-existent citations that appeared to be produced by AI, prompting the government to pull the $755,000 report from its website.
At the time, then-Education Minister Bernard Davis said the fake citations did not affect the report's recommendations. Goodnough and Burke told CBC they didn't know where the errors originated and assumed they occurred within government processes.
Who's responsible now, and what's changing
Lynn Robinson, media relations manager for the Department of Education and Early Childhood Development, said the province has hired Labrador-based Brack and Brine to review all references and citations. The department says errors will be removed and the integrity of the report upheld, with a revised draft to be released publicly in the new year.
Second AI-related citation issue surfaces
Earlier this month, another government-commissioned report, the $1.6 million Health Human Resources Plan developed by Deloitte, was found to contain at least four citations that do not exist. Two major reports with AI-related citation issues in under three months have intensified calls for clear policies, stronger vendor oversight, and better verification workflows.
Government response: oversight, training, and verification
Government Services Minister Mike Goosney said the public service needs oversight, accountability, and training for responsible AI use. He noted in-person training on Microsoft Copilot, department courses on responsible AI, and an online AI support hub for employees.
Goosney added that all AI use will be subject to strict review, human verification, and transparent quality controls in collaboration with the Office of the Information and Privacy Commissioner.
Why this matters for education, government, and HR
Policy credibility depends on trustworthy evidence. Citation errors undermine public trust, risk misinformed decisions, and waste budget. For leaders, this is a signal to tighten AI governance, update procurement language, and upskill teams on verification methods.
Immediate actions for leaders
- Publish a clear AI use policy for documents, research, and reports. Specify allowed tools, prohibited practices, and approval paths.
- Require human-in-the-loop verification for every claim, citation, and data point. Make it someone's job and track sign-off.
- Ban AI-fabricated citations. Use automated checks plus manual validation to confirm sources exist and say what's claimed.
- Log AI tool usage: who used what, when, and for which sections. Keep version history and prompts where feasible.
- Add an "evidence appendix" listing full references with working links or DOIs, and a validation checklist.
- Define risk tiers. Higher-risk reports (policy, budget, health, education) need stricter review and external audit.
- Update vendor contracts to require disclosure of AI use, data sources, and verification methods. Add warranties and penalties.
- Budget time and dollars for QA. Rushed reviews are the fastest route to public corrections and reputational damage.
- Train staff on AI strengths and failure modes, including hallucinations and citation pitfalls.
- Benchmark against public-sector standards such as the Treasury Board of Canada's Directive on Automated Decision-Making.
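The "automated checks plus manual validation" step above can be partially scripted. As a minimal sketch (the function names are illustrative, not from any report): a cheap offline check flags strings that are not even shaped like DOIs, and a network check asks the public Crossref API whether a DOI is actually registered. Anything that fails either check goes to a human reviewer; passing both still does not confirm the source says what's claimed, which remains a manual job.

```python
import re
import urllib.parse
import urllib.request

# DOIs start with a "10." prefix, a registrant code, a slash, then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")


def looks_like_doi(citation: str) -> bool:
    """Cheap offline check: is the string even shaped like a DOI?"""
    return bool(DOI_PATTERN.match(citation.strip()))


def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    """Network check: ask the public Crossref REST API whether the DOI
    is registered. Returns False on any error (assume unverified)."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False


def triage(citations: list[str]) -> tuple[list[str], list[str]]:
    """Split citations into well-formed DOIs (candidates for the
    automated registry check) and everything else (manual review)."""
    automated, manual = [], []
    for c in citations:
        (automated if looks_like_doi(c) else manual).append(c)
    return automated, manual
```

A workflow built on this would run `triage` over the reference list, call `doi_is_registered` on the first group, and route every failure plus the entire second group to a named reviewer with sign-off, per the human-in-the-loop item above.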
Contracting checklist for AI use in commissioned reports
- Mandatory disclosure of all AI tools used and where they were used (sections, drafts, edits).
- No synthetic references. Vendor certifies that every citation exists and is accurately represented.
- Source provenance: provide links/DOIs, access dates, and copies of key sources where licensing permits.
- Deliverables include raw research notes, citation exports, validation logs, and change history.
- Independent spot checks by the client or a third party before final acceptance and payment.
- Privacy safeguards: PII handling, data retention limits, and approval for any external processing.
- Indemnification and clawbacks for fabricated or misrepresented evidence.
Education Accord: what to expect next
The Education Ministry says Brack and Brine is reviewing references now, with a public draft promised in the new year. Stakeholders will expect an errata summary, a clear list of corrections, and specifics on the new verification process.
With Paul Dinn now serving as Education Minister after previously questioning the need for another study, the bar for transparency and speed to action is high. The focus should shift from fixing citations to implementing the strongest recommendations, backed by verified evidence.
Bottom line
AI can speed up drafting, but accountability never shifts to the tool. Put guardrails, training, and contracts in place so reports stand up to scrutiny, and make sure vendors play by the same rules your teams do.