NWT skips standalone AI policy as experts call for guardrails

NWT won't craft a standalone AI policy, leaning on a 2025 guideline and current rules instead. Backers say it's fine; critics warn of privacy, oversight, and accountability gaps.

Published on: Jan 24, 2026

NWT won't create a standalone AI policy - what that means for public servants

The Government of the Northwest Territories says it has no plan to develop a dedicated AI policy. Instead, it will rely on a high-level generative AI guideline released in May 2025 and existing information management rules.

Some see that as enough for now. Others say it leaves gaps in accountability, privacy, and oversight - especially as AI use grows inside government.

Where the GNWT stands

Finance Minister Caroline Wawzonek said the territory isn't adding more policy on top of what already exists. She emphasized confidence in cybersecurity practices, ongoing training, and the secure version of Microsoft Copilot available to staff.

The Department of Finance says employees have an internal "AI Hub" with the guideline, general training, and Copilot training. Staff are expected to review AI outputs before use. The GNWT has not completed its own privacy impact assessment for AI tools, but says it is leveraging assessments from other jurisdictions and conducting legal reviews of terms for common tools, including Copilot.

What the guideline says

The generative AI guideline focuses on guardrails and transparency, recommending that departments:

  • Establish clear rules and responsibilities for using and managing generative AI.
  • Put safeguards in place to protect data and manage risks.
  • Provide clear information on why, how, and when AI is used.
  • Monitor AI tools and outcomes.

It references federal guidance and links to territorial privacy and information policies. For context, see the Government of Canada's generative AI guide.

Pilots underway

The GNWT is testing an AI-powered medical note-taking tool to cut paperwork for healthcare staff. The Department of Justice has also piloted AI for quick rough transcripts from court proceedings, limited to court services staff and judges.

Risks flagged

AI can speed up routine work and surface patterns fast. It can also create privacy risk, reinforce bias, and produce confident but wrong answers (hallucinations). Overreliance may dull critical thinking. Compute use has environmental costs.

Recent examples underline the point: NWT Fire criticized an AI-generated image spread online during a wildfire near Fort Providence, calling it "sensationalized slop." In 2025, Newfoundland and Labrador faced backlash after reports produced by a consultant included fake citations likely generated by AI.

Is the guideline enough?

Inside government, not everyone is convinced. One GNWT employee said the guideline reads like "suggestions," and existing security and privacy policies weren't written with AI in mind. They worry about sensitive data exposure and unclear oversight.

Andrew Fox, the NWT Information and Privacy Commissioner, called the high-level guideline a positive start. Still, he said it reads more as a statement that policies should exist than as policies that can be applied to specific AI use cases, and that the GNWT needs additional, tool-specific guidance.

Teresa Scassa, Canada Research Chair in Information Law and Policy, said it's "really important" to have rules for employee AI use. She described the GNWT guideline as unfocused, mixing departmental deployments with ad-hoc employee use. She noted it lacks clear chains of responsibility and doesn't specify that only vetted, approved tools may be used. She also urged engagement with Indigenous communities on culturally sensitive data.

Union of Northern Workers president Gayla Thunstrom raised concerns about using AI for culturally sensitive content. The union does not support technology replacing people or being used to backfill vacancies. It wants clarity on how AI-related errors affect performance reviews, what models drive approved tools, and how AI's carbon footprint will be tracked in emissions reporting.

AI rules elsewhere in the territory

The City of Yellowknife has no formal AI policy yet but plans to develop guidelines this year. Current access is limited to safe web services and governed by existing privacy, records, and cybersecurity rules.

Aurora College has AI guidelines that apply to students, staff, faculty, and contractors, with statements in course syllabi.

The Law Society of the NWT issued generative AI guidelines in January 2025, reminding lawyers to meet professional standards while adopting new tools. The Supreme Court of the NWT advised in October 2025 that AI-generated submissions must be verified, and parties must disclose when AI was used.

What departments can do now

  • Approve tools explicitly. Limit use to vetted, enterprise versions with clear data handling and retention terms.
  • Define roles. Name accountable owners for AI use in each program area and set up a simple, documented approval process.
  • Require human review. Make human-in-the-loop checks mandatory for any AI-assisted content or analysis.
  • Segment use cases. Separate low-risk tasks (summaries, formatting) from high-risk ones (legal, health, benefits decisions).
  • Protect data. Block uploads of personal, confidential, or culturally sensitive information unless the tool meets privacy, security, and treaty obligations.
  • Log usage. Keep audit trails of prompts, outputs, and decisions for oversight and ATIPP requests.
  • Test for bias and accuracy. Pilot on real workloads, measure error rates, and set thresholds for acceptance.
  • Train teams. Provide recurring training on privacy, prompt practices, and verification standards. Publish do/don't lists by role.
  • Plan for incidents. Define escalation paths, correction procedures, and public communications for AI-related mistakes.
  • Track environmental impact. Estimate compute and include AI use in emissions reporting where feasible.
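The "log usage" item above can be sketched in a few lines. This is a hypothetical illustration, not any GNWT system: the log path, field names, and `log_ai_use` function are all assumptions. It appends one JSON Lines record per AI interaction, pairing each output with the human reviewer the checklist requires.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; a real deployment would need access controls
# and a retention schedule aligned with ATIPP obligations.
LOG_PATH = Path("ai_audit_log.jsonl")

def log_ai_use(user: str, tool: str, prompt: str, output: str, reviewed_by: str) -> dict:
    """Append one audit record per AI interaction (JSON Lines format)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewed_by,  # human-in-the-loop check ("Require human review")
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a Copilot-assisted summary after a staff member reviews it.
entry = log_ai_use(
    user="jdoe",
    tool="Copilot (enterprise)",
    prompt="Summarize the attached briefing note.",
    output="(AI draft summary)",
    reviewed_by="jdoe",
)
```

An append-only, one-record-per-line format like this keeps the trail simple to search when an access-to-information request or an oversight review comes in.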

Bottom line for public servants

The GNWT is holding to a light-touch approach for now. That puts more responsibility on departments to set clear approvals, controls, and audits - especially for high-stakes work.

Use the guideline as a floor, not a ceiling. Treat AI as an intern: helpful, fast, and prone to errors - always verify before you rely on its output.

If your team needs structured upskilling, here's a curated list of AI courses by job function that can support safe, effective adoption.

