PM Carney's AI push meets union resistance as Ottawa warns of some public service job cuts

Ottawa is pushing AI into operations to meet efficiency targets, pairing the rollout with retraining amid job-risk concerns. Ops leaders should pilot low-risk uses, add guardrails, and keep humans in the loop.

Categorized in: AI News, Operations
Published on: Sep 14, 2025

AI in Ottawa's Operations: Efficiency Push, Real Job Risks, and the Playbook Ops Teams Need

Ottawa's chief data officer, Stephen Burt, expects artificial intelligence to lead to some job cuts across the federal public service. The scope is unclear and will vary by role, but the stated goal is to retrain and reassign people where possible. For operations leaders, that means preparing for process change, skills shifts, and tighter controls at the same time.

The government is pressing for efficiency. The prime minister campaigned on using AI to improve the public service, and departments have been asked to find 15 per cent in program spending cuts over three years. Ottawa also signed an agreement with Canadian AI company Cohere to identify places where AI can improve operations, and it plans a public registry to track AI use, though there is no launch date yet.

What's already in motion

  • AI is already used for satellite imagery analysis, weather forecasting, predicting tax case outcomes, and sorting temporary visa applications.
  • AI is positioned as one tool among many to improve efficiency and focus across government.
  • A public registry is planned to keep Canadians informed about AI projects and provide internal tracking.

Where job impact could land

Burt did not name specific areas at risk, but the impact will be job-specific. Expect routine and repetitive tasks to be automated first, with augmentation (not replacement) more common in complex work. The pressure will be on leaders to redeploy people, not leave them idle.

Union and expert cautions

Public Service Alliance of Canada president Sharon DeSousa argues AI isn't a shortcut to better services and warns that broad cuts mean fewer services for people who need them. She called for consultation with unions and front-line workers before rolling out AI across government.

Sean O'Reilly of the Professional Institute of the Public Service of Canada says consultation is often after the fact. He supports using AI to remove mundane tasks but worries about losing human judgment and jobs.

McMaster University professor Catherine Connelly says Canadians remember Phoenix and ArriveCan. She advises against using AI where there's liability risk or for hiring decisions, stressing that AI is a poor substitute for human decision-making in those areas.

Guardrails you must factor in

  • The federal Directive on Automated Decision-Making requires an Algorithmic Impact Assessment (AIA) for systems that can significantly affect people, and completed assessments are published in a public register. Consult the directive for the assessment criteria and impact levels.
  • Transparency is expected. Clear communication with employees and meaningful engagement with unions will reduce friction.

90-day ops playbook

  • Map processes and classify tasks: automate, augment, keep human. Tag each with risk, data sensitivity, and expected ROI.
  • Pick 2-3 low-risk pilots with measurable outcomes (cycle time, backlog reduction, error rates). Use human-in-the-loop, audit logs, and clear rollback plans.
  • Stand up an AI review board: product owner, privacy, security, legal, HR, union liaison, and QA. Define RACI and sign-offs.
  • Procurement checklist: data residency, access controls, logging, model provenance, bias testing, explainability, resilience, exit clauses.
  • Compliance by design: run the Algorithmic Impact Assessment early, document human oversight points, and prepare public registry entries.
  • Data discipline: inventory datasets, minimize PII exposure, set retention rules, add redaction and monitoring for shadow AI use.
  • Workforce plan: role-by-role impact scenarios, reskilling paths, internal mobility routes, and time protected for training.
  • Change management: crisp comms, FAQs, office hours, and a feedback loop. Set expectations that AI assists, not replaces, judgment.
  • Metrics and guardrails: service quality, fairness checks, model drift monitoring, incident playbook, and post-implementation reviews.
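The first playbook step, classifying tasks as automate, augment, or keep-human with risk and sensitivity tags, can be sketched in a few lines. This is a hypothetical illustration: the `Task` fields, the 1-5 scales, and the thresholds are assumptions for the sketch, not an official framework.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    routine: bool          # repetitive, rule-based work
    risk: int              # 1 (low) .. 5 (high): harm if automated badly
    data_sensitivity: int  # 1 (public) .. 5 (protected personal data)
    expected_roi: float    # rough hours saved per week (illustrative)

def classify(task: Task) -> str:
    """Tag a task as 'automate', 'augment', or 'keep human'."""
    if task.risk >= 4 or task.data_sensitivity >= 4:
        return "keep human"   # high-stakes or sensitive: a person decides
    if task.routine and task.risk <= 2:
        return "automate"     # low-risk repetitive work goes first
    return "augment"          # AI assists, a human still decides

backlog = [
    Task("route incoming mail", routine=True, risk=1, data_sensitivity=2, expected_roi=10),
    Task("draft standard replies", routine=True, risk=2, data_sensitivity=3, expected_roi=6),
    Task("assess benefit eligibility", routine=False, risk=5, data_sensitivity=5, expected_roi=4),
]

# Sort by expected ROI so pilot candidates surface at the top of the list.
for t in sorted(backlog, key=lambda t: -t.expected_roi):
    print(f"{t.name}: {classify(t)}")
```

The useful property of even a toy rubric like this is that it forces risk and data sensitivity to be scored before ROI is allowed to drive the conversation.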

Good candidates to automate first

  • Document classification, routing, and triage for high-volume queues.
  • Drafting standard correspondence and summarizing case files with human review.
  • Scheduling, capacity planning, and workload balancing.
  • Analytics dashboards with anomaly alerts on service levels and error patterns.
  • Code and script generation for internal tooling, reviewed and tested by engineers.
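For the first two candidates, classification-with-routing and drafting-with-review, the human-in-the-loop pattern looks the same: the model acts alone only above a confidence floor, everything else lands in a human queue, and every decision is logged. A minimal sketch, assuming a hypothetical upstream classifier that supplies a predicted queue and a confidence score (the 0.85 floor is an arbitrary example, not a recommended value):

```python
CONFIDENCE_FLOOR = 0.85  # illustrative threshold; tune against measured error rates

audit_log: list[dict] = []  # append-only record for post-implementation review

def triage(doc_id: str, predicted_queue: str, confidence: float,
           auto_queue: list, review_queue: list) -> str:
    """Route a document automatically only when the model is confident;
    everything else goes to a human review queue. All decisions are logged."""
    if confidence >= CONFIDENCE_FLOOR:
        auto_queue.append((doc_id, predicted_queue))
        decision = f"auto:{predicted_queue}"
    else:
        review_queue.append((doc_id, predicted_queue, confidence))
        decision = "human-review"
    audit_log.append({"doc": doc_id, "decision": decision, "confidence": confidence})
    return decision

auto, review = [], []
print(triage("A-101", "visa-standard", 0.97, auto, review))  # confident: routed
print(triage("A-102", "visa-complex", 0.61, auto, review))   # uncertain: human queue
```

The audit log is what makes the pattern defensible: it gives the AIA and any post-incident review a complete record of which decisions the model made on its own.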

What to avoid for now

  • Decisions with legal or liability exposure without strong oversight and a completed AIA.
  • Hiring, promotions, or any use that risks discrimination claims.
  • Opaque systems that cannot explain outputs or be audited.
  • Use cases where an error's cost outweighs time saved.

Communication that reduces risk

  • Publish an AI charter: what you will use AI for, what you won't, and who is accountable.
  • Share impact scenarios early. Be explicit about retraining and job transition supports.
  • Provide fast channels for issues and feedback, and report back on fixes.

Upskilling for operations teams

Ops leaders who invest in structured training will move faster with fewer mistakes. For role-based learning and automation-focused paths, see Courses by Job and AI Automation Certification.

The signal is clear: efficiency targets are here, and AI will be part of the toolkit. Start with small, low-risk wins, keep humans in the loop, and communicate every step. The teams that plan for skills shifts and strict governance will protect service quality while hitting cost goals.