AI sovereignty isn't enough: Solomon presses for adoption across Canada's economy

Evan Solomon says adoption matters as much as sovereignty. Ship real deployments that speed services, protect data, and scale safely with clear metrics and oversight.

Categorized in: AI News Government
Published on: Nov 04, 2025

AI adoption is as important as AI sovereignty, Solomon says

Speaking in Toronto, Canada's AI and Digital Innovation Minister Evan Solomon made a clear point: building a strong AI sector matters, but getting people across the economy to actually use the tools matters just as much. That applies across government, too, where value shows up in faster services, fewer backlogs, and better policy execution.

If you work in government, the mandate is simple: move beyond pilots that never scale, and put AI to work where it improves outcomes, safely and measurably.

Why this matters for government right now

Canada can't claim leadership if AI sits on a shelf. Departments need practical deployments that speed up service delivery, strengthen compliance, and free staff for higher-value work.

That means focusing adoption on real workloads: benefits processing, contact centres, grant screening, inspections, policy research, and data analysis, while staying inside strict guardrails.

Sovereignty still counts

Sovereignty isn't just a talking point. It's data stewardship, procurement leverage, and the ability to audit and switch vendors. Think data residency, secure access controls, transparent models, and clear exit options.

Balance is the goal: keep sensitive work on trusted platforms while using proven commercial tools for low-risk tasks.

What your department can do this quarter

  • Pick three high-volume tasks that slow your team down (e.g., document summarization, email triage, form validation).
  • Run 60-90 day pilots with clear success metrics: time saved per case, error rates, and citizen satisfaction.
  • Stand up a lightweight risk review covering data sensitivity, model transparency, human oversight, and red-teaming.
  • Set a procurement path (call-ups, sandbox agreements, or challenge-based buys) so wins can scale without starting over.
  • Train the front line on prompts, data handling, and quality checks. Adoption dies if staff aren't confident.
  • Measure and publish results, even the misses. Openness builds trust with unions, privacy teams, and the public.
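The success metrics named above (time saved per case and error rate) can be tracked with very little tooling. The sketch below is illustrative, not a prescribed system; the `CaseRecord` fields and the sample numbers are assumptions, stand-ins for whatever a department's pilot log actually captures.

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    """One processed case from a pilot log (illustrative fields)."""
    baseline_minutes: float   # average handling time before the tool
    actual_minutes: float     # handling time with the tool
    had_error: bool           # flagged during quality review

def pilot_metrics(cases: list[CaseRecord]) -> dict:
    """Compute time saved per case and the error rate across a pilot."""
    n = len(cases)
    saved = sum(c.baseline_minutes - c.actual_minutes for c in cases) / n
    errors = sum(c.had_error for c in cases) / n
    return {
        "cases": n,
        "minutes_saved_per_case": round(saved, 1),
        "error_rate": round(errors, 3),
    }

# Hypothetical log from a document-summarization pilot
log = [CaseRecord(30, 12, False), CaseRecord(30, 15, False), CaseRecord(30, 20, True)]
print(pilot_metrics(log))
```

Publishing exactly these numbers, including the error rate, is what makes the "even the misses" commitment concrete.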

Guardrails you can use today

Start with existing standards rather than inventing your own. The Government of Canada's Directive on Automated Decision-Making sets expectations for impact assessments, testing, and human oversight.

Pair it with the NIST AI Risk Management Framework to structure risk identification, measurement, and monitoring across the AI lifecycle.

Procurement and funding levers

  • Create a "fast lane" for low-risk tools (no sensitive data, reversible outputs) with pre-approved terms and time-boxed security reviews.
  • Use outcome-based language in RFPs: target accuracy, latency, auditability, and portability; avoid vague feature lists.
  • Budget for scaling early (integration, change management, and training usually cost more than licenses).
  • Pool demand with other departments to lower price and secure transparency commitments.

Data readiness, the honest check

AI fails when data is messy, siloed, or over-classified. Start with a quick inventory: where is the data, who owns it, what's the sensitivity, and how often is it updated?

  • Segment by risk: public, internal, protected, secret. Match use cases to the lowest-risk data that still delivers value.
  • Set retention and audit rules before tools go live to avoid rework and investigations later.
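The tier-matching rule above can be made mechanical: given the highest sensitivity tier a tool is approved for, list only the datasets it may touch. This is a minimal sketch; the tier names follow the list above, but the inventory entries and the `allowed_datasets` helper are hypothetical.

```python
# Sensitivity tiers from the bullet above, ordered low → high risk
TIERS = ["public", "internal", "protected", "secret"]

def allowed_datasets(tool_max_tier: str, inventory: dict[str, str]) -> list[str]:
    """Datasets a tool may use, given the highest tier it is approved for.
    `inventory` maps dataset name → sensitivity tier."""
    ceiling = TIERS.index(tool_max_tier)
    return sorted(name for name, tier in inventory.items()
                  if TIERS.index(tier) <= ceiling)

# Hypothetical inventory built during the quick data audit
inventory = {
    "service_standards": "public",
    "staff_directory": "internal",
    "call_transcripts": "protected",
    "case_files": "secret",
}
print(allowed_datasets("internal", inventory))
```

A tool cleared only for internal data never sees protected or secret holdings, which is the whole point of segmenting before deployment rather than after an incident.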

Ethics, privacy, and human oversight

  • Humans stay in charge for any decision affecting eligibility, enforcement, or rights.
  • Keep a paper trail: prompts, model versions, datasets, and change logs.
  • Test for bias using representative samples and publish remediation steps.
  • Offer a clear appeal path whenever AI contributes to a decision.
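The paper-trail bullet above names the fields an audit entry needs: prompt, model version, dataset, and a timestamp. A minimal sketch, assuming a department rolls its own log: hashing the prompt and output lets the record survive even when the underlying text is too sensitive to store verbatim. The function name and field layout are illustrative, not a standard.

```python
import datetime
import hashlib
import json

def audit_record(prompt: str, model_version: str, dataset_id: str, output: str) -> dict:
    """Build one append-only audit entry for an AI-assisted decision.
    Stores SHA-256 digests rather than raw text for sensitive workloads."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "dataset_id": dataset_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

# Hypothetical entry from a benefits-screening assistant
entry = audit_record("Summarize case intake notes", "vendor-llm-2025-10", "benefits_v4", "Summary: ...")
print(json.dumps(entry, indent=2))
```

An entry like this also supports the appeal path: a reviewer can confirm which model version and dataset contributed to a decision without re-running anything.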

People and skills

Policy doesn't implement itself. Upskill the teams who will actually use these tools: analysts, program officers, call centre agents, and inspectors.

  • Micro-learning on prompts, verification, and privacy basics.
  • Role-based playbooks with example prompts, red flags, and escalation rules.
  • Office hours and a "help channel" for quick troubleshooting.


A 90-day adoption plan

  • Weeks 1-2: Pick use cases, confirm guardrails, draft success metrics.
  • Weeks 3-6: Pilot with a small team. Track time saved, error rates, and user feedback.
  • Weeks 7-8: Security and privacy review, procurement path set, training materials finalized.
  • Weeks 9-12: Scale to adjacent teams, publish results, and lock in maintenance and monitoring.

Common pitfalls to avoid

  • Endless strategy without shipping anything.
  • Pilots that can't scale because procurement wasn't planned.
  • Skipping the data audit and blaming the tool later.
  • Training the wrong audience: leaders are briefed while users are left guessing.

The takeaway

Sovereignty and adoption rise together. Keep data safe, insist on transparency, and move quickly on practical use cases with clear metrics.

This is how government turns AI from headline material into better services that people actually feel.

