Most Canadians Want AI Regulation Amid Deepfake Concerns and Trust Issues, Poll Shows
A Leger poll shows 85% of Canadians support AI regulation to ensure safety and ethics. Concerns include deepfakes and misinformation, highlighting the need for clear policies.

Canadians Call for Strong AI Regulation Amid Growing Usage
As artificial intelligence becomes more integrated into daily life across Canada, most Canadians want governments to step in with clear regulations. A recent Leger poll surveying 1,518 Canadians found that 85% support regulating AI tools to ensure ethical and safe use. Of those, 57% expressed strong support for regulation.
Opinions on AI’s impact vary. About one-third of respondents see AI as beneficial, another third view it as harmful, and the rest remain uncertain. This split highlights the nuanced public attitude towards AI, with trust levels differing based on context and application.
AI’s Role in Workplaces and Trust Levels
AI is increasingly used in classrooms, healthcare, and government offices. More than half of Canadians who use AI at work say it has boosted their productivity. Younger generations, especially Gen Z, report higher productivity gains compared to Gen X and Baby Boomers.
Trust in AI depends heavily on its use case. While 64% trust AI for simple household tasks or educational support, only 36% trust it for health advice and 31% for legal matters. Only 18% support replacing teachers with AI.
Concerns Over Deepfakes and Misinformation
Concerns about AI misuse, especially deepfakes, are significant. Deepfakes—manipulated videos or audio impersonating real people—have been used in Canada to spread false endorsements attributed to politicians and celebrities. Saskatchewan Premier Scott Moe’s government is actively seeking the creators of deepfake videos impersonating prominent figures.
The Canadian Centre for Cyber Security has also warned about AI-generated voice and text scams targeting citizens by impersonating officials to steal money or information. These risks highlight the need for clear regulations and safeguards.
AI as a Tool, Not a Threat
Despite concerns, AI still offers valuable support. Researchers like Steve DiPaola from Simon Fraser University use AI assistants to explore ethical questions and prepare students for AI’s workplace role. Such tools demonstrate AI’s potential when applied thoughtfully and responsibly.
Government’s Approach to AI Regulation
While public demand for regulation is clear, the federal government is signalling a careful approach. AI Minister Evan Solomon indicated a shift away from heavy warnings toward policies that also capture AI’s economic benefits. His office reiterated its commitment to responsible AI use, investing in secure infrastructure and frameworks to identify risks early.
They also emphasize ongoing engagement with Canadians and industry to address safety, bias, and privacy concerns. More detailed plans are expected when Parliament resumes in September.
What This Means for Government Employees
For those working in government, understanding public sentiment and the practical challenges of AI regulation is crucial. Balancing innovation with safety requires informed policies that reflect both caution and opportunity.
Learning more about AI tools and ethical frameworks can help government professionals contribute effectively to this evolving landscape. Resources such as Complete AI Training offer courses that can deepen knowledge on AI applications and responsible use.
Key takeaways for government roles:
- Support and advocate for clear AI regulations aligned with public concerns.
- Promote transparency and ethical AI deployment within public services.
- Stay informed about emerging AI risks like deepfakes and misinformation.
- Encourage education and training on AI’s capabilities and limitations.
As AI continues to expand across sectors, government employees play a vital role in shaping policies that protect citizens while fostering innovation.