Trust is the tipping point for AI in Canada's public service

Canadians are split on AI in services, with trust in government as the hinge. Quick wins, clear safeguards, and human oversight can boost support while speeding delivery.

Published on: Sep 23, 2025

Trust in government is the hinge for public acceptance of AI at scale

After leading his party to a fourth consecutive term in government, Prime Minister Mark Carney signalled a clear mandate: make government more productive by deploying AI at scale and focusing on results over spending. A new federal minister of artificial intelligence and digital innovation, Evan Solomon, has been appointed to push that vision across departments and service lines.

The pitch is simple: let AI take on routine work, clear bottlenecks, and speed up delivery. Unions warn that a 15-per-cent reduction in spending on government employees will erode service quality. Many observers agree technology can improve performance, but only if it is secure, accountable, and implemented with discipline.

What Canadians think right now

A May survey of 2,000 Canadians shows the public is split on AI in service delivery. Forty-two per cent support using AI to resolve bottlenecks, 32 per cent oppose, and 25 per cent are undecided. That plurality in favour is notable given the technology's novelty.

Support for reducing the size of the public service is much stronger: 67 per cent agree the federal bureaucracy should be significantly reduced, while 12 per cent disagree. Those who want a smaller public service are 22 percentage points more likely to back AI use. Among those who oppose AI, more than half also oppose reducing the public service.

Trust and ideology drive support

Trust is the lever. People who trust the federal government are more likely to support AI in service delivery. Yet those with higher trust are also 26 percentage points less comfortable with reducing the public service, even though they remain broadly supportive of some reduction.

Ideology matters. Individuals on the political right are more supportive of AI use in government by 18 percentage points compared with those on the left. The gap widens for cutting the size of the public service: a 38-point difference between right and left.

Gender, education, and where the gaps are

Gender identity is a clear factor. Women are 16 percentage points less likely than men to support government use of AI, while showing no meaningful difference on reducing the size of the public service. Among conservatives, women are less likely than men to back reductions.

Education level shows no measurable effect on support for AI or for shrinking the public service. That suggests trust, values, and perceived risk drive attitudes more than credentials.

Implications for public service leaders

Canadians will judge the agenda on two things: whether services get faster and whether government safeguards the public interest. To build acceptance, leaders need quick wins, visible guardrails, and clear communication on how jobs and service quality will be protected.

If AI improves turnaround times, reduces backlogs, and strengthens consistency, trust will rise. If implementations stall or create errors, skepticism will harden and support will fade.

What to do in the next 90 days

  • Identify 3-5 high-volume, rules-based use cases (triage, status updates, form validation, call-centre assist) with measurable service outcomes.
  • Stand up small pilots with human-in-the-loop review, clear escalation paths, and predefined service standards.
  • Complete and publish an Algorithmic Impact Assessment for each pilot, following the Government of Canada's AIA guidance, and align with the Directive on Automated Decision-Making.
  • Set data safeguards: privacy-by-design, audit logs, access controls, red-teaming for security, and bias testing before and after deployment.
  • Define success metrics: time to decision, queue size, client satisfaction, error rates, and cost per transaction.
  • Engage unions and frontline teams early; co-design workflows and clarify that AI supports, not replaces, professional judgment in critical decisions.
  • Publish plain-language public notices on where AI is used, why, how to get a human, and how to appeal.
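The success metrics in the list above can be computed from pilot case logs. As a minimal sketch, assuming a simple log where each case records when it was opened, when (or whether) it was closed, and whether the decision contained an error (all field names here are illustrative, not a real government schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical pilot log: one record per case. Field names are
# illustrative assumptions, not an actual departmental schema.
cases = [
    {"opened": "2025-06-01", "closed": "2025-06-04", "error": False},
    {"opened": "2025-06-02", "closed": "2025-06-09", "error": True},
    {"opened": "2025-06-03", "closed": "2025-06-05", "error": False},
    {"opened": "2025-06-10", "closed": None, "error": False},  # still queued
]

def days_between(opened, closed):
    """Whole days from opening to decision."""
    return (datetime.fromisoformat(closed) - datetime.fromisoformat(opened)).days

closed_cases = [c for c in cases if c["closed"] is not None]
metrics = {
    # Median, not mean: a few slow cases shouldn't mask typical service speed.
    "median_days_to_decision": median(
        days_between(c["opened"], c["closed"]) for c in closed_cases
    ),
    "queue_size": sum(1 for c in cases if c["closed"] is None),
    "error_rate": sum(c["error"] for c in closed_cases) / len(closed_cases),
}
print(metrics)
```

Tracking the same few numbers before and after each pilot gives a defensible baseline for the "X% faster" claims leaders will want to publish.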

Guardrails that build trust

  • Human oversight: Keep humans in review for medium/high-impact decisions; automate only low-risk, repetitive tasks first.
  • Transparency: Label AI-assisted interactions, disclose data sources, and document model limitations.
  • Risk management: Use an established framework such as the NIST AI Risk Management Framework for consistent practice across departments.
  • Equity checks: Test outcomes by demographic segments and service channels; fix drift fast.
  • Procurement discipline: Require vendors to meet government data residency, auditability, and incident reporting standards.
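The equity check above can be as simple as comparing outcome rates across segments and flagging gaps. A minimal sketch, where the segment labels and the five-point review threshold are illustrative assumptions rather than a prescribed standard:

```python
# Equity-check sketch: compare approval rates across demographic
# segments and flag any gap beyond a chosen review threshold.
# Segment labels and the 5-point threshold are illustrative assumptions.
decisions = [
    ("segment_a", True), ("segment_a", True), ("segment_a", False),
    ("segment_b", True), ("segment_b", False), ("segment_b", False),
]

# Tally (total, approved) per segment.
tallies = {}
for segment, approved in decisions:
    total, ok = tallies.get(segment, (0, 0))
    tallies[segment] = (total + 1, ok + approved)

approval = {seg: ok / total for seg, (total, ok) in tallies.items()}
gap = max(approval.values()) - min(approval.values())
flagged = gap > 0.05  # review if segments differ by more than 5 points
print(approval, round(gap, 3), flagged)
```

Run on every service channel, not just one, since a model can look fair in aggregate while drifting badly on a single channel or segment.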

What to communicate to Canadians

  • Where AI is used today and why those areas were chosen.
  • Service improvements achieved so far (e.g., X% faster, Y fewer backlogs, Z% fewer errors).
  • How to reach a human and how to appeal a decision.
  • What data is collected, how it's protected, and who is accountable.

Workforce and capability

AI adoption requires upskilling, not just new tools. Train staff on prompt quality, oversight, privacy, security, and bias testing. Create AI product owner roles inside the business, not only in IT, to keep efforts tied to service outcomes.


The bottom line

Public support for AI in government is within reach, but it depends on trust and visible results. Start with low-risk, high-volume tasks, prove the gains, and report them clearly. Protect due process, keep humans in the loop for consequential decisions, and treat transparency as policy, not PR.

Do that, and AI becomes a practical way to deliver better services with fewer bottlenecks, while earning the permission to go further.