Building Public Trust in Government AI: Lessons from Canada’s Approach
Governments must build public trust in AI through transparency, inclusive consultation, and open communication. Honest sharing of challenges helps foster confidence and improve services.

Trusting the Process, Trusting the Product: How Governments Can Win Over the Public on AI
Trust in artificial intelligence (AI) might not have been a headline issue in Canada’s recent federal election, but it remains a critical challenge shared by governments globally. The Canadian federal public service is actively preparing for a future in which AI, both generative and agentic, underpins many digital public services. Its approach includes crafting an AI strategy that reflects Canada’s democratic values, with a focus on human rights, public trust, and national security.
However, trust must be earned. A recent discussion featuring voices from within the Canadian public service explored how to build that trust both internally and with the public.
Building Trust in AI Within Government
Canada’s AI strategy took more than a year to develop and relied heavily on extensive consultations. Jonathan Macdonald, director of responsible data and AI at the Treasury Board of Canada Secretariat, highlighted the importance of roundtables involving academics, industry, civil society, and Indigenous groups. This broad engagement laid a collaborative foundation for the strategy.
Steve Rennie from Agriculture and Agri-Food Canada shared his team's experience in building trust through a generative AI chatbot that offers conversational information about government agricultural programs. Winning the Public Service Data Challenge in 2023 validated their approach. But the success hinged on consistent communication—engaging stakeholders early and often to understand concerns and address them collaboratively.
Rennie emphasized that the biggest goal in trust-building is breaking down barriers to understanding AI. Accessibility matters. People prefer different ways to receive information—short summaries, bullet points, or even rhymes. AI can tailor content to individual preferences, making public services more friendly and trustworthy.
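To make the idea of format tailoring concrete, here is a minimal Python sketch of how the same program description could be reshaped for different readers. The program text, the preference labels, and the call_llm() helper are hypothetical placeholders for illustration, not details of the chatbot Rennie's team built.

```python
# Minimal sketch of format tailoring. The program text, preference labels,
# and call_llm() helper are hypothetical placeholders, not part of any
# actual government chatbot.

PROGRAM_INFO = (
    "The Example Farm Support Program helps producers plan for income "
    "declines and invest in reducing on-farm risk."
)  # illustrative text only

FORMAT_INSTRUCTIONS = {
    "summary": "Rewrite this as a two-sentence plain-language summary.",
    "bullets": "Rewrite this as three short bullet points.",
    "rhyme": "Rewrite this as a short, accurate rhyming couplet.",
}

def build_prompt(preference: str) -> str:
    """Combine the source text with the reader's preferred presentation."""
    instruction = FORMAT_INSTRUCTIONS.get(preference, FORMAT_INSTRUCTIONS["summary"])
    return f"{instruction}\n\nSource text:\n{PROGRAM_INFO}"

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whichever generative model a team uses."""
    raise NotImplementedError("Connect this to your model or API of choice.")

if __name__ == "__main__":
    # Print the prompt that would be sent for each reader preference.
    for preference in FORMAT_INSTRUCTIONS:
        print(f"--- {preference} ---")
        print(build_prompt(preference))
        print()
```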
Winning Public Trust
Dr Saeid Molladavoudi, director of the Centre for AI Research and Excellence at Statistics Canada, stressed that while AI offers vast opportunities, it also carries risks that governments must manage responsibly. Deploying AI efficiently and ethically means mitigating potential harms, which requires building AI literacy and capacity.
He identified three crucial questions for adopting AI technology:
- Does the technology work effectively?
- Is it legally and ethically sound?
- Do people understand and trust it?
Without public trust, even the best technology is ineffective. To build this trust, Molladavoudi proposed a public AI registry listing all government AI projects. This registry would be open for public consultation and scrutiny, complete with contact information for engagement.
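Molladavoudi did not describe a specific format for such a registry, but a minimal, hypothetical sketch of what one machine-readable entry might contain could look like the following; every field name and example value is an assumption for illustration only.

```python
# Hypothetical sketch of one public AI registry entry. The schema and the
# example record are illustrative assumptions, not an official format.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIRegistryEntry:
    project_name: str               # what the system is called
    department: str                 # which institution operates it
    purpose: str                    # plain-language description of what it does
    status: str                     # e.g. "pilot", "in production", "retired"
    risk_notes: str                 # known limitations and mitigations
    contact_email: str              # where the public can send questions
    consultation_open: bool = True  # whether public comment is currently invited

# An illustrative record, not a real project listing.
entry = AIRegistryEntry(
    project_name="Program Information Chatbot",
    department="Example Department",
    purpose="Answers questions about public programs in plain language.",
    status="pilot",
    risk_notes="Responses link to source pages so users can verify answers.",
    contact_email="ai-registry@example.gc.ca",
    consultation_open=True,
)

# Publishing entries as structured JSON keeps the registry easy to scrutinize.
print(json.dumps(asdict(entry), indent=2))
```

Publishing entries in an open, structured format like this is one way to make "open for public consultation and scrutiny" operational, since journalists, researchers, and citizens could search and reuse the records directly.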
Trusting the Process and Embracing Risk
When asked whether public servants feel free to take risks in AI development, Macdonald acknowledged a “fear of the unknown” within government. There is hesitation to launch new AI initiatives in an untested environment where mistakes might have consequences.
One way to overcome this is to engage directly with the users affected by AI systems. Macdonald called this “working in the open and failing forward.” Sharing setbacks openly can build trust and reduce fear. He noted that plans rarely unfold exactly as expected, so honest conversations about challenges help temper anxieties and encourage courageous innovation.
Prioritizing People Over Technology
Public trust is harder for governments to secure than for private firms. Governments must maintain legitimacy with citizens even as ruling parties change, which makes accountability essential. Unlike private companies driven by profit, governments serve different mandates and must prioritize the public interest.
“We are talking about humans,” Macdonald emphasized. AI development comes with competing pressures: rapid technological advances and high expectations for improved services. He noted that trust in government is already low, so government teams must work harder to build and maintain it, knowing it can be lost quickly.
Rennie concluded that clear and honest communication about AI goals is key to building trust. It’s not about perfection but about sharing lessons learned and showing a willingness to improve. This transparency reflects a capacity to adapt and ultimately deliver better public services.
The ideas shared in this discussion highlight practical steps governments can take to earn trust in AI—from inclusive consultation and accessible design to openness about risks and results.