AI helps Canada's CIPHER catch and debunk foreign disinformation - with Russia now, China and the U.S. next

Canada is using AI with human fact-checkers to spot and debunk foreign narratives faster through CIPHER. It's catching Russian claims now, with Chinese and U.S. content next.

Published on: Feb 15, 2026

AI-augmented fact-checking is accelerating Canada's response to disinformation

Researchers in Regina say artificial intelligence is helping Canada keep pace with a steady flow of online falsehoods built to divide the public and distort reality. By integrating AI into CIPHER, a debunking tool from the Canadian Institute for Advanced Research (CIFAR), teams can spot suspect claims faster while keeping human judgment at the center.

CIPHER scans foreign media sites for dubious narratives linked to Canada. A human fact-checker then reviews each claim before any public call is made.

What CIPHER is seeing right now

Brian McQuinn, an associate professor at the University of Regina and one of the project's leads, said the system currently analyzes Russian campaigns, with work underway to decode content in Chinese languages. He added it could also examine narratives originating in the United States.

One recent catch: a Russian outlet claimed Alberta is moving toward independence. That's false. While separatists have held events and reportedly spoken with U.S. officials, there is no formal process underway for Alberta to separate. "Effective disinformation often has kernels of truth in it," McQuinn said.

CIPHER launched three years ago following research that found pro-Kremlin social media accounts targeted far-right and far-left groups in Canada with false narratives about the war in Ukraine. These included baseless claims that Russia invaded to root out a neo-Nazi regime and that Ukraine was seeking nuclear weapons.

McQuinn said the overarching goal of these campaigns is to tear societies apart and incite violence, and that they spread when regular people pass them along. "(Campaigns) will use events in the news and tailor stories to advance it in different ways," he said, adding that China and Russia seek to make the West look like it's decaying economically, politically, and socially.

He also noted the United States is increasingly becoming a main source of disinformation in Canada, amplified by the fact that most Canadian social media activity sits on U.S. platforms. "We have seen that Canadian news and certain types of Canadian content are being downgraded and throttled within these algorithms," he said.

AI is part of the problem, and part of the fix

AI is behind much of the junk content populating feeds. Yet it's also key to scaling verification. "We are in an AI arms race around disinformation," McQuinn said. CIPHER uses AI to triage volume, then relies on human fact-checkers to decide what's real and what isn't.

How CIPHER works (plain and simple)

  • Scan: Monitor foreign media for claims that touch Canadian people, policies, or public opinion.
  • Flag: Use AI to surface suspect or fast-spreading narratives for review.
  • Verify: Human fact-checkers assess claims against primary sources before publishing debunks.
  • Expand: Current focus is Russian content; Chinese-language analysis is in progress. U.S.-sourced narratives may be assessed next.
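The scan-flag-verify flow above can be sketched as a minimal triage pipeline. This is a hypothetical illustration, not CIPHER's actual implementation: the `Claim` record, the keyword-overlap scorer standing in for the AI model, and the `triage`/`human_verify` functions are all invented for the sketch. The key design point it shows is that the AI only ranks and queues claims; the verdict field is set exclusively by a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    source: str
    text: str
    score: float = 0.0        # AI relevance/suspicion score
    verdict: str = "pending"  # set only by a human reviewer

# Hypothetical stand-in for an AI scorer: keyword overlap with
# Canada-related terms. CIPHER's real models are not public.
CANADA_TERMS = {"canada", "canadian", "alberta", "ottawa"}

def ai_score(text: str) -> float:
    words = set(text.lower().split())
    return len(words & CANADA_TERMS) / max(len(words), 1)

def triage(claims, threshold=0.05):
    """Scan + flag: AI surfaces suspect claims for human review."""
    for c in claims:
        c.score = ai_score(c.text)
    # Only flagged claims enter the review queue, highest score first.
    return sorted((c for c in claims if c.score >= threshold),
                  key=lambda c: c.score, reverse=True)

def human_verify(claim: Claim, verdict: str) -> Claim:
    """Verify: a trained reviewer makes the final call."""
    claim.verdict = verdict
    return claim
```

In practice the scorer would be a language model and the review queue a shared dashboard, but the separation of duties (machine ranks, human rules) is the same.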

Policy and platform angle

CIPHER is already in use at DisinfoWatch, a Canadian debunking organization. Its founder, Marcus Kolga, is calling for stronger legislation and rules for digital platforms to curb the spread of lies: "Us doing it alone is not sufficient enough. It requires technology and for us to sort of make up that gap that we have."

CIFAR has received funding from the federal and Alberta governments. McQuinn said he has spoken with government agencies about deploying CIPHER more widely.

Practical takeaways for science and research teams

  • Assume cross-border influence. Track narratives in multiple languages, not just English or French.
  • Adopt the "10-second pause" in team comms. Before sharing, confirm the source, date, and whether the claim names a verifiable process (e.g., legislation, court filings, official statements).
  • Watch for truth-adjacent claims. Disinformation often anchors itself to a partial fact, then leaps to a false conclusion.
  • Build a source-of-record log. Maintain links to primary documents so staff can verify quickly.
  • Keep humans in the loop. Use AI for triage and clustering; reserve final judgment for trained reviewers.
  • Track algorithm shifts. If Canadian content visibility drops, compensate with owned channels and direct subscriber updates.
  • Partner with external debunkers. Share signals and speed up response cycles.
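Two of the takeaways above, the "10-second pause" checks and the source-of-record log, can be combined into a single record per reviewed claim. The structure below is a hypothetical sketch (field names and the keyword list for "verifiable process" are assumptions, not a published schema):

```python
import datetime

def log_entry(claim, primary_sources, verdict, reviewer):
    """Hypothetical source-of-record entry: ties a claim to the primary
    documents a reviewer checked, so the next staffer can verify fast."""
    checks = {
        # The "10-second pause": source, date, verifiable process.
        "has_primary_source": bool(primary_sources),
        "names_verifiable_process": any(
            k in claim.lower()
            for k in ("legislation", "bill", "court", "filing", "statement")
        ),
    }
    return {
        "claim": claim,
        "primary_sources": list(primary_sources),
        "checks": checks,
        "verdict": verdict,          # set by the human reviewer
        "reviewer": reviewer,
        "logged_at": datetime.date.today().isoformat(),
    }
```

A shared log of such entries lets a team answer "have we already debunked this?" without re-doing the primary-source work each time.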


