4,000 complaints a day: Canada pilots a safer, fairer AI chatbot that doesn't ask for your data

CDS is testing a GPT-4 chatbot to guide people to Canada.ca pages, with strict privacy and community checks. Promising, but scale, errors, and cost could keep it in beta.

Published on: Nov 21, 2025

What Public Servants Should Know About the Government of Canada's AI Chatbot Beta

The Government of Canada fields up to 4,000 website complaints per day. With more than 10 million webpages, finding the right answer is hard, both for the public and for the front-line teams who pick up the slack.

An AI chatbot, built by the Canadian Digital Service (CDS) and powered by GPT-4, is being tested to reduce that friction. The goal: let people ask plain-language questions, get accurate summaries, and jump straight to the right government page, with a reminder to verify the response.

How the prototype works (and what it won't do)

There's no account requirement, and the tool rejects queries that contain personal information (SINs, phone numbers, and the like). That's a deliberate choice: keep interactions anonymous and depersonalized until a person is ready to identify themselves in a formal process, like an application.

As Michael Karlin, acting director of policy at CDS, put it: "If you don't need personal information, don't collect personal information."
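
CDS hasn't published its filter, but the idea is simple enough to sketch. A minimal input-layer screen along these lines, with illustrative regex patterns and a hypothetical screen_query helper, rejects a query before anything sensitive reaches the model or the logs:

```python
import re

# Illustrative patterns only; a real filter would need broader coverage
# (addresses, health card numbers, etc.) and locale-aware formats.
PII_PATTERNS = {
    "SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian Social Insurance Number
    "phone number": re.compile(r"(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_query(query: str) -> tuple[bool, str | None]:
    """Return (ok, reason). Rejecting outright, rather than redacting,
    keeps the interaction anonymous: nothing sensitive is ever sent to
    the model or written to logs."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(query):
            return False, f"Your question appears to contain a {label}. Please remove it and ask again."
    return True, None

ok, reason = screen_query("My SIN is 123-456-789, am I eligible for EI?")
print(ok, reason)  # -> False, with a message asking the user to remove the SIN
```

Rejecting instead of redacting mirrors the CDS posture: the safest data is data you never hold.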

Privacy and security by restraint

Karlin's team avoided collecting large equity datasets (e.g., gender or race of testers) to reduce long-term risk. His warning was blunt: "The dataset you collect now may become a weapon in the not-too-distant future."

The posture is clear: minimize data, reduce attack surface, and cut the incentive for misuse. For public servants, that's a reminder that safety often comes from what you never store.

Equity: a scalpel, not a chainsaw

AI can amplify biases if left unchecked. Hammed Afenifere, CEO of Oneremit, highlighted how models skew toward regions with richer datasets: useful for the U.S., Canada, or the U.K., but thin for African markets. We've all seen versions of this, like devices that fail on darker skin.

CDS is working to ensure people get responses that reflect their context (for example, programs for Black business owners) without building bias into the answer. Karlin called it "a scalpel and not a chainsaw-based process."

Testing is moving outward in "bubbles": first with government employee communities (e.g., Black and LGBTQ+ networks), then with community groups. Indigenous communities will be engaged with the understanding that perspectives on AI are diverse and won't fit a single template.

Crucially, CDS wants test questions to come from the people who actually use these services. A transgender person may interact with government in specific ways, so those scenarios should originate from that community, not from assumptions on a dev team.

Who defines "responsible" AI?

The Ottawa Responsible AI Summit surfaced a recurring theme: who gets to decide what "responsible" means? Some called for standards and committees. Others stressed representation and proximity to real users.

Karlin's take skews practical: bring the "table" to communities that will use the tool instead of debating who gets invited. Start small, learn fast, expand testing as trust grows.

As MP Jenna Sudds noted, responsible AI means benefits must reach everyone-and reflect Canada's diversity and values.

Pilot results, risks, and scale

The chatbot recently finished a trial with 2,700 randomly selected users, achieving roughly a 95% success rate. A larger test with 3,500 people is planned for next year.

Two constraints stand out. First, scale: the system must handle millions of queries. Second, risk: bad answers can cause real harm. Recall Air Canada, which was held liable in 2024 after its chatbot gave a passenger incorrect bereavement-fare advice.

There's also the question of cost. Karlin was frank that the project may never leave beta. Before full launch, the team needs to be confident the service is safe, helpful, and worth taxpayer dollars.

Practical steps for government teams building AI help tools

  • Start small, measure hard: run targeted pilots, publish success criteria, track failure modes.
  • Block PII at the input layer: redact or reject sensitive data before it reaches the model (the filter sketch above shows one approach).
  • Minimize data retention: log only what's essential for safety and improvement, and set short retention windows.
  • Use clear disclaimers: prompt users to verify answers and provide the official source link every time.
  • Co-design with communities: source test scenarios from the people who use the services (e.g., Black entrepreneurs, trans users, newcomers).
  • Test for bias with a "scalpel" approach: create targeted test sets for specific populations and programs.
  • Model fit over hype: evaluate multiple models for accuracy, cost, and policy controls; switch if fit isn't there.
  • Guardrails and escalation: define topics the bot won't answer, and create handoffs to human support (see the sketch after this list, which pairs this with the disclaimer item).
  • Incident playbook: document how you'll respond to harmful or incorrect outputs, including fast corrections, user notifications, and model updates.
  • Procurement and cost modeling: forecast usage, negotiate predictable pricing, and set thresholds for pausing or scaling.
  • Governance alignment: align with internal privacy, security, and accessibility standards; conduct regular audits.
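
As one way to implement the disclaimer and guardrail items above (the topic list, messages, and the classify_topic and fetch_answer callables are hypothetical stand-ins, not CDS's actual design):

```python
from typing import Callable

# Hypothetical values: a real deployment would source these from policy.
BLOCKED_TOPICS = {"legal advice", "medical diagnosis", "immigration rulings"}
HANDOFF_MESSAGE = "This question needs a person. Please contact 1 800 O-Canada."
DISCLAIMER = "This is an AI-generated summary. Verify it against the official page before acting."

def answer_with_guardrails(
    query: str,
    classify_topic: Callable[[str], str],
    fetch_answer: Callable[[str], tuple[str, str]],
) -> str:
    """Refuse out-of-scope topics; otherwise answer with a source link and disclaimer.

    classify_topic and fetch_answer stand in for whatever topic classifier
    and retrieval backend a team actually uses.
    """
    if classify_topic(query) in BLOCKED_TOPICS:
        return HANDOFF_MESSAGE  # escalate to a human instead of guessing
    summary, source_url = fetch_answer(query)
    # Every response carries the official source and a verification prompt.
    return f"{summary}\n\nSource: {source_url}\n{DISCLAIMER}"
```

The key design choice is that refusal and escalation happen before generation, so the model never produces an answer on a blocked topic.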

Why this matters for public service

Done right, AI help tools can reduce call volumes, shorten time-to-answer, and improve access, especially for people with complex needs who can't easily visit offices. Done poorly, they create risk, confusion, and cost.

The CDS beta shows a path worth studying: keep data light, test with the right people, and be ready to shut it down if it can't meet public-interest standards.

Learn more: Explore the Canadian Digital Service's work at digital.canada.ca.

If your team is building internal AI skills for service delivery, see curated options by role at Complete AI Training.

