Ottawa Outlines Six Pillars for Long-Delayed National AI Strategy
The federal government has revealed the framework for its national AI strategy, a document promised repeatedly but never delivered. The six pillars, disclosed in the spring economic update, signal how Ottawa plans to approach artificial intelligence development, safety, and adoption.
The strategy rests on protecting Canadians and democracy, empowering citizens, driving AI adoption for economic growth, building sovereign Canadian AI capacity, scaling domestic companies, and forming trusted global partnerships. The descriptions mention universal AI training access, updated privacy laws, national AI safety capabilities, and secure government systems.
Timing Remains Unclear
AI Minister Evan Solomon told Parliament in February the strategy would launch "this quarter." The spring economic update provided no launch date. The 2025 budget had promised the strategy by year-end, but it gave no firm release date.
The government has moved ahead on related initiatives. It recently opened applications for developing Canada's sovereign AI supercomputing infrastructure, intended to advance research while protecting national interests.
Safety Questions After Tumbler Ridge
The strategy's rollout comes as Ottawa faces pressure on AI safety following the March shootings in Tumbler Ridge, B.C. The shooter's ChatGPT account was banned and flagged eight months before the attack, but OpenAI did not alert police until after the killings.
Solomon met with OpenAI CEO Sam Altman in early March. Altman agreed to include Canadian mental health and law experts in the company's safety office. Solomon also requested that experts from the Canadian AI Safety Institute conduct a full assessment of OpenAI's new safety protocols.
B.C. Premier David Eby called for minimum thresholds requiring platforms to report threats of violence to law enforcement. Last week, Altman apologized to Tumbler Ridge residents, a gesture Eby called "necessary, and yet grossly insufficient."
Provinces Acting Alone
Manitoba announced it would become the first Canadian jurisdiction to ban youth from AI chatbots including ChatGPT and Claude. B.C. Attorney General Niki Sharma said stronger guardrails are needed to protect people from online harms, but argued that federal leadership is essential for such measures to be effective.
Heritage Minister Marc Miller said the government is "very seriously" considering restrictions on young Canadians' access to social media and AI chatbots, but has not made a final decision.