Want Public Trust on AI, Ottawa? Show Your Prompts

Canada used AI to help analyze submissions to its national AI consultation, producing a nearly 350-page report, but trust rests on one fix: show the prompts. Share settings, workflow, and review steps to rebuild confidence.

Published on: Feb 10, 2026

Canada's AI Strategy Consultation Used AI. Trust Now Hangs on One Simple Step: Show Your Prompts.

The federal government released the results of its national AI strategy public consultation. The document runs nearly 350 pages of recommendations. Innovation, Science and Economic Development Canada (ISED) used an AI-enabled workflow, alongside human review, to cluster and summarize submissions into themes.

That's progress. But something critical is missing: how the analysis was done, how recommendations were weighted, and, most importantly, what prompts were used to guide the AI.

What's Missing (and Why It Matters)

ISED disclosed the tools it used, but not the prompts or settings that shaped the outputs. Without that, public servants, researchers, and Canadians can't evaluate the reliability of the process or replicate it.

"The methodological details of how this analysis was conducted are so scant, if the feds are hoping to court public trust with the new AI strategy this is a bad start," said Blair Attard-Frost, Amii fellow and assistant professor at the University of Alberta.

Trust in government AI is already shaky. An Abacus Data survey found 52 percent of Canadians don't trust the federal government to oversee AI in a way that protects the public. Transparency is the fastest lever to move that number.

The Simple Fix: Publish the Prompts

In government, accountability isn't optional. If AI helped synthesize public input, the public deserves to know the instructions that guided it. Publishing prompts won't expose personal data if handled correctly. It will show your work, reduce suspicion, and enable meaningful oversight.

What to Publish Now

  • Prompt templates: system prompts, user prompts, and any instructions used for clustering, summarizing, or deduplicating.
  • Model details: model names and versions, parameters (e.g., temperature), and any tools/plugins invoked.
  • Workflow maps: how prompts were chained, in what order, and where human review occurred.
  • Evaluation method: sample sizes, human review rubric, how disagreements were resolved, and known failure modes.
  • Weighting logic: how themes were scored or prioritized, including thresholds or heuristics.
  • Change log: versions, dates, and a record of prompt updates across the consultation period.
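A disclosure like the one above is most useful when it is machine-readable, so researchers can diff versions and rerun comparisons. Here is a minimal sketch of what one appendix entry could look like; every field name and value is illustrative, not an official Government of Canada schema.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical "Prompts & Methods" disclosure record.
# Field names are illustrative only, not an official schema.
@dataclass
class PromptDisclosure:
    task: str               # e.g. "theme clustering"
    system_prompt: str      # template text, with redactions already applied
    model: str              # model name and version
    temperature: float      # sampling parameter
    human_review_step: str  # where human reviewers checked outputs
    version: str            # change-log entry for this prompt
    updated: str            # ISO date of the last revision

record = PromptDisclosure(
    task="theme clustering",
    system_prompt="Group submissions into themes; do not infer identities.",
    model="example-model-v1",
    temperature=0.2,
    human_review_step="Two analysts review each cluster label",
    version="1.3",
    updated="2026-01-15",
)

# Publishing as JSON keeps the appendix diffable and easy to audit.
print(json.dumps(asdict(record), indent=2))
```

Structured records like this make the change log almost free: each prompt revision is just a new version of the same file in a public repository.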

How to Do It Safely

  • Redact sensitive content: remove any personal information or details tied to specific submissions.
  • Share templates, not raw data: publish the instructions and settings, while keeping underlying submissions protected.
  • Centralize access: host prompts and documentation in a public repository with version history.
  • Align with policy: map your disclosure to the Government of Canada's Directive on Automated Decision-Making risk levels and documentation needs.
  • Plan for audit: define a reproducibility protocol so an independent team can rerun the process on a test set.
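The redaction step above can itself be partly automated before templates are published. This is a minimal sketch of a first-pass scrub; the patterns are illustrative only, and any real process would still need human review and department-specific PII rules.

```python
import re

# Illustrative redaction patterns; NOT a complete PII rule set.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL REDACTED]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE REDACTED]"),
]

def redact(text: str) -> str:
    """Apply each pattern in turn before a template is published."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

template = "Summarize the submission from jane.doe@example.ca (613-555-0100)."
print(redact(template))
# -> Summarize the submission from [EMAIL REDACTED] ([PHONE REDACTED]).
```

An automated pass like this catches the obvious cases; the point of the rubric and human review is to catch everything it misses.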

Why This Helps Government Teams

  • Builds public trust quickly with tangible openness.
  • Reduces ATIP friction by proactively answering how decisions were informed.
  • Improves internal quality by making workflows reviewable and repeatable across departments.
  • Sets a standard others can follow for consultations, grants, procurement, and program design.

Immediate Next Steps

  • Publish a "Prompts & Methods" appendix for the consultation results within two weeks.
  • Commit to a brief, independent peer review of the AI-enabled workflow and share the findings publicly.
  • Adopt a standing rule: if AI informs policy, its prompts, settings, and review process are documented and disclosed (with appropriate redactions).

Upskill Your Team on Prompts

If your department is using AI to analyze submissions or draft summaries, staff need shared standards and repeatable prompt patterns. A little training prevents a lot of rework.

Using AI to organize a national AI consultation makes sense. Now close the loop: show your prompts, show your process, and earn the trust you need to move this file forward.

