Budget 2025: Federal government fails workers on AI
Budget 2025 signals a push to embed AI across government operations without the protections workers and the public deserve. The result is predictable: rushed deployments, weaker services, and growing risk to equity-deserving people.
If you work in government, this isn't abstract. It changes how decisions get made, how services are delivered, and who keeps their job.
What's at stake for public servants
- Job loss from automation without clear redeployment plans or training.
- Invasive monitoring and surveillance tools that blur the line between performance management and tracking.
- AI, not people, making employment and service decisions that affect livelihoods.
- Bias and discrimination embedded in data and models, hitting equity-deserving people hardest.
- Degraded service quality as users get funneled into chatbots and opaque automated systems.
The low road signaled in Budget 2025
The budget leans on replacing workers with AI without meaningful consultation, transparency, or accountability. That risks hollowing out expertise inside departments and handing critical decisions to black-box systems.
Memorandums of understanding with major vendors give corporations insider status on tech adoption. That's a shortcut to privatization: public funds flow to corporate profits while internal capability shrinks, standards slip, and long-term control is lost.
When tech firms become embedded in public systems, they push for weaker oversight and lower bars for deployment. Service users pay the price with longer wait times, fewer human channels, and fewer avenues to appeal decisions.
Public digital infrastructure: use the funding to protect the public
Budget 2025 adds $125.6 million for sovereign public AI infrastructure, bringing the total to $925.6 million over five years. That's real money. It should focus on public sector needs first.
Sensitive health, education, tax, and immigration data should live on publicly owned servers. Pair this with investments in publicly owned and operated compute, storage, and MLOps tooling to reduce vendor lock-in, improve transparency, and cut lifecycle costs.
Public infrastructure also enables stronger accountability: clearer audits, environmental targets for compute, and open standards that keep exit options on the table.
Measuring AI's impact on work
The budget provides Statistics Canada with $25 million over six years, plus $4.5 million annually, to launch an Artificial Intelligence and Technology Measurement Program (TechStat). That's a step forward.
Unions should co-develop the data collection methods so we capture real impacts on jobs, health and safety, and workplace equity. Government, unions, and employers need formal working groups to align on planning, policy, and programs that protect workers through technological change.
What government teams can do now
- Pause workforce cuts tied to AI until impacts are properly assessed. Require an Algorithmic Impact Assessment and a Privacy Impact Assessment before deployment, following the Government of Canada's published AIA guidance.
- Keep a human channel open. For benefits, immigration, taxation, and HR decisions, mandate human-in-the-loop review and a clear appeal process with published service standards.
- Consult unions early and in writing. Build Worker Impact Assessments that include job redesign, training, and redeployment plans before procurement.
- Create an internal AI system registry. Publish plain-language summaries, data sources, intended use, known limits, and contacts for oversight. Update it with every model change. A minimal sketch of a registry entry follows this list.
- Protect data sovereignty. Store sensitive datasets on publicly owned infrastructure. Prohibit vendors from training models on public data. Enforce key management, encryption, and strict access controls; see the encryption sketch after this list.
- Set procurement guardrails. Every MOU and contract should include transparency obligations, conflict-of-interest clauses, bias testing requirements, independent audit rights, open standards, and strong exit clauses.
- Run equity and bias audits before and after launch. Use representative datasets, involve affected communities, and test accessibility. Publish results and remediation timelines; a simple parity check is sketched after this list.
- Invest in public digital infrastructure. Build shared compute and model hosting that departments can use without defaulting to vendor stacks. Favor open-source tooling where feasible and link capacity to clean energy targets.
- Build internal capability. Train product owners, policy analysts, service designers, and HR on AI literacy, evaluation, and safe adoption. If your team needs structured upskilling, see courses by job at Complete AI Training.
- Establish incident reporting and red-teaming. Treat AI failures like security incidents: report, fix, learn, and publish. Align with the Directive on Automated Decision-Making requirements.
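To make the registry idea concrete, here is a minimal sketch in Python. The `AISystemRecord` class, its field names, and the example values are all illustrative assumptions, not an existing government schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AISystemRecord:
    """One entry in an internal AI system registry (hypothetical schema)."""
    name: str
    plain_language_summary: str  # what the system does, for non-specialists
    data_sources: list[str]      # datasets the model was trained or runs on
    intended_use: str            # the decision or service it supports
    known_limits: list[str]      # documented failure modes and gaps
    oversight_contact: str       # who answers questions and handles appeals
    last_model_change: date      # update with every model change

entry = AISystemRecord(
    name="Benefits triage assistant",
    plain_language_summary="Ranks incoming benefits claims for caseworker review.",
    data_sources=["Historical claims, 2015-2023"],
    intended_use="Prioritization only; a human makes the final decision.",
    known_limits=["Not validated on claims filed in French"],
    oversight_contact="ai-oversight@agency.example",
    last_model_change=date(2025, 11, 4),
)

# Serialize to JSON so the registry stays machine-readable and publishable.
print(json.dumps(asdict(entry), default=str, indent=2))
```

Publishing entries in a structured format like this keeps the registry easy to audit and easy to post in plain sight.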
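On data sovereignty, the core technical control is encryption at rest with keys the public agency holds itself. Here is a minimal sketch using the open-source cryptography package's Fernet interface; the record contents are placeholders, and a real deployment would keep keys in an agency-controlled key management service.

```python
from cryptography.fernet import Fernet

# Key generated and held by the public agency, never shared with vendors.
# In production this would live in an agency-controlled key management service.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive record before writing it to storage.
record = b"claim_id:0001;status:under_review"
ciphertext = fernet.encrypt(record)

# Only holders of the agency's key can read the record back.
assert fernet.decrypt(ciphertext) == record
```

The point is organizational as much as cryptographic: if the agency generates and stores the key, vendor access to the data becomes a policy decision the agency controls.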
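For bias audits, one simple pre-launch check is comparing favourable-outcome rates across groups. The sketch below computes a demographic-parity gap; the group labels, toy data, and the 0.05 threshold in the comments are assumptions, and a real audit would test many more metrics with affected communities at the table.

```python
from collections import defaultdict

def parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Largest difference in favourable-outcome rate between any two groups.

    Each item pairs a group label with whether the automated decision
    was favourable, e.g. ("group_a", True).
    """
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [favourable, count]
    for group, favourable in outcomes:
        totals[group][0] += int(favourable)
        totals[group][1] += 1
    rates = [fav / n for fav, n in totals.values()]
    return max(rates) - min(rates)

# Toy example: a gap above an agreed threshold (say 0.05) would trigger
# remediation before deployment.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(f"parity gap: {parity_gap(decisions):.2f}")  # 0.33 here, well above 0.05
```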
Bottom line
AI can support better services, but not by replacing people first and asking questions later. Use the budget to build public infrastructure, protect sensitive data, and put worker and community safeguards at the center.
Lead with transparency, involve unions and affected communities, and measure what matters. That's how you keep service quality high and public trust intact.