Deepfakes, voice clones, and vanishing profits: Canada's AI fraud wake-up call

AI fraud is surging: 72% of Canadian businesses were hit, and deepfakes, voice clones, and slick phishing now target pay and hiring. A 90-day playbook helps HR teams verify fast and stop costly changes.

Published on: Mar 06, 2026

AI fraud is getting faster and harder to spot: HR's 90-day playbook

AI is helping HR deliver better work. It's also arming scammers with speed, scale and realism we didn't have to deal with two years ago.

New surveys from KPMG Canada and RBC show the hit is real. Seventy-two percent of Canadian businesses lost 1%-5% of annual profits to fraud in the past year. Among those hit, 81% say the incident involved AI, and 72% were targeted more than once. Employees feel it too: 83% of Canadians now assume unexpected texts, emails or calls are fake until proven real.

What the data says

  • Attack volume and severity: 94% of leaders expect AI-powered attacks to target their organisation this year, yet only 26% have a tested incident response plan that addresses deepfakes and voice clones.
  • Top attack types: AI-generated phishing emails or chats (60%), deepfake documents (39%), and voice-clone executive impersonation calls (24%).
  • Consumer headspace: 81% say there's "a new scam every week," 87% struggle to tell if an ad is real, and 75% find it harder to judge if a website is legit.
  • Confidence gap: 39% feel confident spotting AI scams today, while 68% believe AI may soon make scams impossible to detect.
  • Behavioural risk: 41% have clicked a bad link before realising, and 40% have spoken to a fraudster on the phone.

Why HR can't sit this out

HR is a high-value target. Payroll, benefits, identity data, job postings, and hiring workflows are perfect for social engineering and deepfakes.

Employment scams spiked as criminals used AI to impersonate companies, recruiters, and candidates. Advisory firm Gartner warned that by 2028, one in four job candidates could be fake. That hits your employer brand, wastes recruiter time, and can push bad hires into the business.

Attack patterns to expect in HR

  • Voice-clone "executive" calls pushing urgent payroll or vendor changes.
  • AI-crafted recruiter or HR emails/chats asking for onboarding data, tax forms, or MFA codes.
  • Deepfake documents (IDs, bank letters, diplomas) that look flawless on a quick scan.
  • Fake candidates using AI to pass interviews, borrowed portfolios, or real-time voice filters.
  • Imposter job postings and fake "assessment fees" using your logo to trick applicants.

The 30-60-90 day HR playbook

Days 0-30: Stop the easy wins for attackers

  • Verification rules: No payroll, benefits, or banking changes via chat or email. Require out-of-band verification (call back a known number on file) and a second approver for sensitive requests.
  • Executive approvals: Use a pre-agreed code phrase for urgent phone requests. No exceptions.
  • All-hands micro-briefing (20 minutes): Show examples of AI phishing, deepfake docs, and voice clones. Share a simple "pause-verify" checklist.
  • Careers page notice: Add a clear scam warning: where you post jobs, how you contact candidates, and that you never ask for payment or personal data over messaging apps.
  • MFA everywhere HR lives: HRIS, ATS, benefits portals, payroll, and file storage. Turn on transaction alerts where available.
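
The Days 0-30 rules above boil down to a simple gate: a sensitive change is only releasable after an out-of-band callback to a number already on file plus a second approver. A minimal sketch, assuming a hypothetical `ChangeRequest` record (the field names are illustrative, not from any real HRIS API):

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    kind: str                         # e.g. "payroll", "banking", "vendor"
    requested_via: str                # channel the request arrived on
    callback_verified: bool = False   # called back a known number on file?
    approvers: set = field(default_factory=set)

SENSITIVE_KINDS = {"payroll", "banking", "vendor"}

def can_apply(req: ChangeRequest) -> bool:
    """Apply the 'call back a known number, rule of two' policy."""
    if req.kind not in SENSITIVE_KINDS:
        return True  # non-sensitive: normal workflow
    # Sensitive: out-of-band verification AND a second approver, always.
    return req.callback_verified and len(req.approvers) >= 2

# A voice-clone "CEO" emailing an urgent banking change is blocked:
urgent = ChangeRequest(kind="banking", requested_via="email")
print(can_apply(urgent))  # False until callback + two approvers
```

The point of encoding the rule this way is that urgency and channel never factor into the decision; the attacker's pressure tactics simply have nothing to push on.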

Days 31-60: Build muscle memory

  • Training cadence: Move to quarterly, scenario-based refreshers. Use short clips of real phishing and deepfakes. KPMG notes 81% of companies already train every 6-12 months; tighten that cycle.
  • Phishing simulations: Add AI-style pretexts (personalised details pulled from LinkedIn or recent company news).
  • Candidate verification: Live video with liveness checks, gov-ID verification, proctored skills tests, and reference calls to corporate lines. No interviews over personal messaging apps.
  • Policy updates: Codify "rule of two" approvals, no credential sharing, and a one-click reporting path for suspicious messages.
  • IR playbook for HR: Who to call, what to freeze (payroll changes, accounts), what to preserve (emails, call logs, documents), and how to message staff and applicants.

Days 61-90: Modernise and measure

  • Continuous controls: Move away from point-in-time checks. Add ongoing, risk-based reviews for payroll edits, vendor onboarding, and high-risk changes.
  • "Fight AI with AI" (with IT): Invest in tools that flag anomalies, detect manipulated media, and verify identity signals. Over half (52%) of companies are doing this; make sure HR use-cases are covered.
  • Tabletop exercise: Run a deepfake CEO payroll scam scenario with HR, Finance, IT, and Comms. Time the response. Patch the gaps.
  • Metrics: Track report rate per 100 employees, time-to-verify sensitive requests, number of blocked changes, and training completion. Tie budget to risk reduction, not just tool count.
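
The metrics in the last bullet are simple arithmetic to compute once the data is logged. A minimal sketch with made-up sample numbers (not KPMG or RBC figures):

```python
from statistics import median

def report_rate_per_100(reports: int, headcount: int) -> float:
    """Suspicious-message reports per 100 employees in a period."""
    return round(100 * reports / headcount, 1)

def time_to_verify(minutes: list[float]) -> float:
    """Median minutes from sensitive request to out-of-band verification."""
    return median(minutes)

# Example quarter for a hypothetical 450-person company:
print(report_rate_per_100(27, 450))          # 6.0 reports per 100 employees
print(time_to_verify([12, 30, 45, 18, 22]))  # median of 22 minutes
```

Trending these two numbers quarter over quarter shows whether training is raising reporting and whether verification is fast enough that staff won't route around it.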

RBC's practical tips to share with employees

  • Pause when emotions are triggered: urgency and fear are the hook.
  • Verify unexpected requests through trusted channels (call back known numbers, not what's in the message).
  • Watch for personalised scams using public info (job titles, recent posts, org charts).
  • Use strong passwords, multi-factor authentication, and transaction alerts.

Recruiting: keep quality high and fraud low

  • Identity and presence: Liveness checks on video, no voice-only final interviews, and device camera on for assessments.
  • Work samples with attestation: Proctored tasks or pair-sessions reduce outsourced/AI-generated submissions.
  • References that matter: Call corporate switchboards or verified numbers; avoid personal mobiles sent in email.
  • Documentation: Ask for original formats where possible; scrutinise fonts, metadata, and inconsistent stamps in "bank letters" and IDs.

Incident response for HR: the first 60 minutes

  • Isolate and preserve: Don't delete messages. Save headers, numbers, recordings, and documents.
  • Freeze sensitive changes: Payroll edits, banking updates, benefit beneficiaries, and vendor adds.
  • Escalate fast: Notify Security/IT, Finance, Legal, and Comms. Start an event ticket.
  • Outreach: If candidates or employees were targeted, send a short, clear notice with what to do next and how you'll follow up.

Culture beats tools (but use both)

Six in ten leaders plan to raise fraud budgets by up to 7% this year, prioritising detection tech, training, and transaction controls. Good move, but tools don't fix decision habits.

Seven in ten U.S. managers saw employees make AI-related mistakes last year, and some errors cost more than $50,000. That's a training and process problem. Coach people to slow down, verify, and escalate early.

Quick HR checklist

  • Two-person approval and call-back for any pay, bank, or vendor change.
  • Quarterly micro-training with AI-style simulations and live deepfake examples.
  • Clear careers page warning and a single reporting channel for scam sightings.
  • Candidate identity checks, proctored assessments, and verified references.
  • Tested HR incident playbook and a quarterly tabletop with Finance and IT.
  • Continuous, risk-based monitoring of high-impact transactions.

Where to upskill your team next

The threats are getting smarter and faster. Your edge is a team that knows how to pause, verify, and make the next right decision, every time.

