AI in the Public Service: Privacy, Transparency, and Jobs - PIPSC Calls for In-House Development
As Ottawa moves to embed artificial intelligence across departments, PIPSC President Sean O'Reilly is sounding the alarm on three fronts: privacy, transparency, and job security. His ask is simple: build key AI capabilities inside government, not just with private vendors.
The government's partnership with Cohere is framed as a push for efficiency. O'Reilly's position is that sensitive public data should be managed by public servants, with clear oversight, auditability, and Canadian accountability. "We have the expertise, we have the knowledge," he says.
Why PIPSC is pushing for in-house AI
Data sovereignty and control sit at the core of the union's stance. If AI models train on or process sensitive records, departments must be confident about where the data lives, who can access it, and how it's used over time.
O'Reilly is also pressing for a public registry of AI systems used by federal institutions. This would give Canadians and employees visibility into where AI is deployed, its purpose, and the level of human oversight.
Transparency is policy - but practice matters
The Treasury Board Secretariat says bargaining agents and other stakeholders were consulted on the federal AI strategy and on updates to the Directive on Automated Decision-Making. That's encouraging on paper. The issue, from PIPSC's view, is day-to-day inclusion as real systems get built and procured.
If your team is piloting or scaling an AI tool, treat transparency as a deliverable, not a talking point. Publish the use case, human-in-the-loop controls, data flows, and risk ratings. The directive is here: Directive on Automated Decision-Making (TBS).
Will AI replace public servants? The honest answer: it depends
History shows automation can expand work, not just reduce it. As Carleton University's Majid Komeili notes, robotics in manufacturing eventually led to growth and new roles. But AI is different: it targets cognitive tasks that many public service roles perform daily.
Jobs that use AI to augment judgment and service delivery will likely evolve and stay valuable. Jobs reduced to button-clicking are at risk. O'Reilly, an IT professional, backs augmentation: "How can we make a public servant better with the AI to better serve the Canadian public?"
What this means for your branch right now
- Define "human-in-the-loop" for every AI use case. Spell out who decides, who reviews, and when a human must override.
- Classify data early. Set rules for residency, retention, and model training. No ambiguity on personal or sensitive data.
- Log decisions and outcomes. You need audit trails to explain, challenge, and improve automated recommendations.
- Stand up a simple, public-facing registry entry for each AI system: purpose, datasets, oversight, contacts (a minimal sketch follows this list).
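To make the registry and audit-trail items concrete, here is a minimal sketch in Python of what a registry entry and a decision log record could capture. The class and field names (AiSystemRegistryEntry, DecisionRecord, and the sample values) are illustrative assumptions, not an official TBS schema.

```python
# Minimal sketch of a registry entry and an audit record for an AI system.
# Field names and sample values are illustrative assumptions, not a
# prescribed government schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AiSystemRegistryEntry:
    system_name: str
    purpose: str
    datasets: list[str]       # data sources feeding the system
    data_classification: str  # e.g. "Protected B" (assumed label)
    human_oversight: str      # who reviews, who can override
    risk_rating: str          # e.g. an AIA impact level, if assessed
    contact: str              # accountable program owner


@dataclass
class DecisionRecord:
    system_name: str
    case_id: str
    model_version: str
    recommendation: str       # what the system suggested
    human_decision: str       # what the reviewing officer decided
    overridden: bool          # True if the human changed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Example: one registry entry a branch could publish, plus one audit record.
entry = AiSystemRegistryEntry(
    system_name="Benefits triage assistant (pilot)",
    purpose="Rank incoming applications for reviewer attention",
    datasets=["application forms", "prior decisions (de-identified)"],
    data_classification="Protected B",
    human_oversight="Officer reviews every recommendation before a decision",
    risk_rating="AIA impact level II (assumed)",
    contact="program-owner@example.gc.ca",
)

record = DecisionRecord(
    system_name=entry.system_name,
    case_id="CASE-0001",
    model_version="pilot-2025-01",
    recommendation="high priority",
    human_decision="high priority",
    overridden=False,
)

print(json.dumps(asdict(entry), indent=2))
print(json.dumps(asdict(record), indent=2))
```

However your department stores these records, the point is the same: every system gets a public-facing entry, and every automated recommendation leaves a trail a human can explain, challenge, and improve.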
Procurement checklist for AI pilots
- Data residency in Canada for all sensitive datasets and logs.
- No vendor training on your data without explicit, written approval.
- Full audit rights, model version history, and clear rollback paths.
- Bias testing, error rates, and performance metrics disclosed upfront.
- Human override for any decision that affects eligibility, benefits, or compliance.
- Security reviews aligned to departmental standards and the directive.
Build vs. buy: a practical split
- Build in-house for systems touching citizen data, benefits eligibility, tax, compliance, or security-sensitive workflows.
- Buy or partner for low-risk productivity tooling (search, drafting, summarization) with strict data controls and logging.
- Use pilots with tight scopes and sunset dates. Expand only with evidence: service quality up, risks down.
Guardrails for automated decisions
- Require Algorithmic Impact Assessments before deployment and at major updates.
- Mandate plain-language notices to the public where AI is used in decisions.
- Provide clear appeal and escalation paths to a human decision-maker.
- Monitor for drift: accuracy, bias, and complaint trends over time (see the sketch after this list).
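As one way to make the drift item operational, here is a small Python sketch that compares recent accuracy and complaint rates against a baseline and flags when either moves past a tolerance. The thresholds, metric names, and function are placeholders of my own, not requirements from the directive.

```python
# Sketch of a simple drift check: compare recent performance against a
# baseline and flag when accuracy drops or complaints rise beyond a
# tolerance. Metrics and thresholds are illustrative assumptions.

def drift_flags(
    baseline: dict[str, float],
    recent: dict[str, float],
    max_accuracy_drop: float = 0.05,
    max_complaint_increase: float = 0.02,
) -> list[str]:
    """Return human-readable drift warnings (empty list if none)."""
    flags = []
    if baseline["accuracy"] - recent["accuracy"] > max_accuracy_drop:
        flags.append(
            f"Accuracy fell from {baseline['accuracy']:.2f} "
            f"to {recent['accuracy']:.2f}; trigger a review."
        )
    if recent["complaint_rate"] - baseline["complaint_rate"] > max_complaint_increase:
        flags.append(
            f"Complaint rate rose from {baseline['complaint_rate']:.2%} "
            f"to {recent['complaint_rate']:.2%}; trigger a review."
        )
    return flags


# Example: metrics from the validation period vs. the last month in production.
baseline = {"accuracy": 0.91, "complaint_rate": 0.010}
recent = {"accuracy": 0.84, "complaint_rate": 0.035}

for warning in drift_flags(baseline, recent):
    print(warning)
```

Run a check like this on a schedule, and fold bias metrics and appeal outcomes into the same recurring report so drift triggers a review rather than quietly accumulating.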
Closing the collaboration gap
PIPSC says it's ready to help shape responsible adoption, including retraining for roles that will change or phase out. The government says consultation is ongoing. The path forward is obvious: put unions, privacy, policy, IT, and program owners in the same room early - before contracts are signed and systems go live.
If you manage a team, start building the talent side now. Identify roles to augment, plan reskilling, and set clear guidelines for safe use. For structured learning by role, see: AI courses by job.
Bottom line
AI can raise service quality and speed - if it's built with public values at the core. Keep sensitive work in-house where it counts, make usage visible, and put humans firmly in control of decisions that affect people's lives.