Ottawa's AI Strategy Is Coming Fast. Canadians Want Guardrails Even Faster
The federal government is preparing to launch a new national AI strategy. Public sentiment is clear: move forward, but build guardrails first.
"We see more of an inclination to want government to be a regulator of AI," said David Coletto of Abacus Data. "There's more concern than there is optimism."
A 30-Day "Sprint" That Sparked Pushback
Artificial Intelligence Minister Evan Solomon kicked off a rapid consultation in the fall. An expert group had 30 days to submit recommendations. Canadians got the same window to weigh in.
Critics called it too rushed and too industry-heavy. In response, lawyers, advocates, and academics launched a public-led "people's consultation" to create space for broader input.
"This seems like upping the ante on moving fast and breaking things," said tech lawyer Cynthia Khoo. The open letter that preceded the new consultation flagged concerns spanning environmental impacts, labour rights, mental health (including AI-triggered psychosis), inaccuracies in generative systems, privacy risks, and the surge of non-consensual intimate deepfakes.
Polling: Regulation First, Growth Second
A Leger poll in August found 85% of respondents want governments to regulate AI for ethical and safe use. In November, North Poll Strategies reported 60% prefer a skeptical, harm-prevention stance; 40% want a pro-growth approach.
When asked what to prioritize, 60% chose legislation to ensure ethical and safe use, 34% picked making government more efficient, 28% cited attracting AI research investment, and 24% wanted fewer regulations to compete with the U.S. For context on how to read online polling, see the Canadian Research Insights Council's guidance, linked under Resources below.
Another friction point: the government's questionnaire was long, with required open-ended responses. Pollster Alex Kohut warned some respondents likely dropped off, or used AI to complete it, raising the risk of "a robot telling a robot what to do."
Ottawa's Response and a Shift in Emphasis
Solomon's office says public trust is central and the process was "broad and multi-channel," including an independent task force, targeted roundtables, and ongoing engagement with civil society, labour, researchers, and industry.
Under Prime Minister Mark Carney, policy has tilted from harms and regulation toward economic benefits and adoption. Polling suggests a gap between that emphasis and public sentiment. As Coletto put it, Canadians hold both optimism and anxiety, and they don't want government acting as a cheerleader.
There's also political space here. Anxiety about job impacts could create an opening for parties focused on labour protections.
What This Means for Public Servants
If you work in policy, procurement, service delivery, or oversight, the direction is clear: build trust while enabling practical use. Here's what to move on now.
- Make safety, privacy, and equity non-negotiables: Require risk assessments for generative AI, bias testing, human oversight on high-impact decisions, and red-teaming before deployment.
- Procure with teeth: Ask vendors for model cards, data provenance, energy-use disclosures, incident reporting, and clear opt-out paths for the public where feasible.
- Protect jobs and skills: Run labour impact assessments, work with unions, and set transparent rules on augmentation vs. replacement. Fund reskilling where AI changes roles.
- Do real public engagement: Keep consultations open longer, reduce friction, and avoid relying on AI to summarize public feedback. Publish what you heard and how it changed the policy.
- Prepare for deepfakes and abuse: Coordinate with law enforcement on non-consensual intimate deepfakes, support victims, and pilot content authenticity signals in communications.
- Measure what matters: Define metrics for safety, accuracy, equity, and trust. Share dashboards so people can see progress and problems (a minimal sketch follows this list).
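
To make that last item concrete, here is a minimal sketch of how a team might compute and flag two of the metrics named above, accuracy and a rough equity gap, from a labelled evaluation sample. The column names, the "approve"/"deny" outcome, and the thresholds are illustrative assumptions, not a prescribed government standard, and a real dashboard would also cover safety and trust measures.

```python
# Minimal sketch, assuming a labelled evaluation set with hypothetical
# fields: "prediction", "label", and "group" (a demographic attribute).
from collections import defaultdict

def accuracy(rows):
    """Share of predictions that match the label."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r["prediction"] == r["label"]) / len(rows)

def equity_gap(rows):
    """Largest difference in positive-outcome ("approve") rates across groups.
    A rough demographic-parity check, not a full fairness audit."""
    outcomes = defaultdict(list)
    for r in rows:
        outcomes[r["group"]].append(1 if r["prediction"] == "approve" else 0)
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates) if rates else 0.0

# Hypothetical evaluation sample for an AI-assisted triage tool.
sample = [
    {"prediction": "approve", "label": "approve", "group": "A"},
    {"prediction": "approve", "label": "deny",    "group": "A"},
    {"prediction": "deny",    "label": "approve", "group": "B"},
    {"prediction": "approve", "label": "approve", "group": "B"},
]

metrics = {"accuracy": accuracy(sample), "equity_gap": equity_gap(sample)}
thresholds = {"accuracy": 0.90, "equity_gap": 0.10}   # illustrative targets only
higher_is_better = {"accuracy": True, "equity_gap": False}

for name, value in metrics.items():
    target = thresholds[name]
    ok = value >= target if higher_is_better[name] else value <= target
    print(f"{name}: {value:.2f} (target {target}) -> {'OK' if ok else 'REVIEW'}")
```

The pattern extends to other measures: define the metric, publish the target, and surface a clear pass/review signal rather than burying results in a report.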
Process Notes and Timelines
The "people's consultation" launched this week, with submissions due by March 15. It invites lived-experience input and open comments on what Canadians want from AI policy.
By contrast, the government's 26-question consultation included only three questions on safety and public trust. Most focused on research, talent, adoption, commercialization, and scaling the AI industry.
Why This Matters Inside Government
Public confidence is the license to operate. The polling is consistent: Canadians are open to AI if the rules are clear, harms are addressed up front, and accountability isn't outsourced to vendors.
Build the safeguards in policy now. Adoption will follow.
Resources
- On online polling standards: Canadian Research Insights Council (CRIC)
- For teams building baseline AI literacy by job function: Complete AI Training: Courses by Job