Imper.ai raises $28M to verify who's real on every call, chat, and ticket

Imper.ai emerges from stealth with $28M to verify who's really on the line before cash or creds move. Agentless, metadata-first checks span video, chat, phone, and help desk.

Published on: Dec 05, 2025

Updated 18:25 EST / December 04, 2025

Imper.ai launches with $28M to stop AI-powered impersonation at the source

Impersonation is crossing from nuisance to core business risk. Imper.ai just exited stealth with $28 million in funding to help companies verify who's actually on the other end of a call, chat, or ticket, before money moves or credentials get handed over.

Founded in 2024, the company verifies identity across video calls, chat apps, phone calls, and IT help desk interactions. Instead of scraping conversation content, it reads hard-to-forge metadata (device telemetry, network diagnostics, behavioral signals, and organizational context) to confirm authenticity in real time.
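
Imper.ai hasn't published how these signals are weighed, so the sketch below is purely illustrative: it shows how metadata of this kind could, in principle, be folded into a verify-or-escalate decision. Every field name, weight, and threshold here is an assumption, not the company's actual model.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical metadata about a live session; field names are illustrative."""
    device_known: bool        # device fingerprint matches one seen before for this user
    network_consistent: bool  # network path/geolocation consistent with recent history
    behavior_typical: bool    # interaction patterns fall within the user's normal range
    org_context_valid: bool   # the requester's role plausibly owns the requested action

def verify(signals: SessionSignals, threshold: float = 0.75) -> str:
    """Fold the signals into a weighted score and map it to a decision.

    Weights and threshold are made up for illustration; a real product would use
    richer models and continuously updated telemetry rather than a fixed table.
    """
    weights = {
        "device_known": 0.35,
        "network_consistent": 0.25,
        "behavior_typical": 0.25,
        "org_context_valid": 0.15,
    }
    score = sum(w for name, w in weights.items() if getattr(signals, name))
    return "verified" if score >= threshold else "step_up"

# "step_up" would trigger an out-of-band check before money or access moves.
print(verify(SessionSignals(True, True, False, True)))   # verified (0.75)
print(verify(SessionSignals(False, False, True, True)))  # step_up (0.40)
```

The point is the shape of the decision: passive signals gathered around the session, scored in real time, with a step-up path when confidence is low.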

What makes Imper.ai different

  • Agentless deployment: No plugins. No browser extensions. No workflow changes for end users.
  • Metadata-first verification: Leans on digital "breadcrumbs" attackers struggle to fake, even with advanced deepfakes and voice clones.
  • Privacy-first posture: Works across Zoom, Microsoft Teams, Slack, WhatsApp, Google Workspace, and help-desk systems without scanning conversation content.
  • Channel coverage: Video, chat, phone, and IT support, where social engineering typically lands.

According to Imper.ai's co-founder and CEO Noam Awadish, AI-driven impersonation is now a major driver of financial loss and reputational risk. Gartner expects that by 2027, half of enterprises will invest in anti-deepfake and disinformation-security tools, evidence that prevention is becoming a must-have rather than a nice-to-have.

Why this matters for security, IT, and product

  • Security leaders (CISOs, IR, SOC): Real-time verification helps stop executive voice scams, vendor payment fraud, MFA reset abuse, and ticket-takeover attempts before they escalate.
  • IT and help desk: Reduces reliance on weak identity proofs during password resets or access requests. Less friction, fewer manual checks.
  • Product and engineering: Low-latency checks that don't touch message content simplify integration and limit data exposure. Useful for building trust into collaboration features or support flows.

Where it fits in your stack

  • Upstream of comms and support tools: Sits alongside Zoom, Teams, Slack, WhatsApp, and help-desk platforms to verify participants.
  • Downstream of identity: Complements SSO/MFA by validating that the person using the session is the real human, not a cloned voice or deepfake.
  • Feeds the SOC: Export events to SIEM/SOAR for correlation with BEC, payroll change, or vendor fraud alerts (a generic export sketch follows this list).
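
Imper.ai's actual export format isn't documented in this article, so the following is a generic sketch of pushing a verification event to a SIEM over HTTP. The endpoint URL, token, and event schema are assumptions; substitute whatever your collector (Splunk HEC, Elastic, Sentinel, and so on) actually expects.

```python
import json
import urllib.request

# Hypothetical verification event; the schema is an assumption, not Imper.ai's documented format.
event = {
    "source": "identity-verification",
    "action": "mfa_reset_request",
    "channel": "phone",
    "decision": "step_up",  # verification did not pass cleanly
    "timestamp": "2025-12-04T18:25:00Z",
}

# Generic JSON-over-HTTP push to a SIEM collector; swap the URL and auth header
# for whatever your collector expects.
request = urllib.request.Request(
    "https://siem.example.com/collector/event",
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <collector-token>",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)
```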

Questions to ask during evaluation

  • What signals power verification (device, network, behavioral, org context), and how are they protected?
  • What's the false positive/negative rate by channel (voice, video, chat, help desk)?
  • Latency: How fast is verification during live calls or ticket flows?
  • What's the privacy model? Any content inspection? Data retention and regional controls?
  • Integrations: SSO/SCIM, SIEM/SOAR, ticketing, and collaboration platforms?
  • Coverage: Internal + external participants, contractors, and vendors?
  • Auditability: Can you export evidence for investigations and compliance?
  • Bypass resistance: Results of red-team tests against deepfakes and voice clones?

30-60-90 day rollout (practical starting point)

  • Days 1-30: Pilot with help desk and finance approvals. Gate MFA resets, wire updates, and executive comms. Baseline incident volume and verification success.
  • Days 31-60: Expand to high-risk teams (AP, payroll, procurement, IT admins). Integrate with SIEM and SOAR. Tune policies for risk-based prompts (a hypothetical policy sketch follows this list).
  • Days 61-90: Extend to company-wide calls and external vendor interactions. Formalize playbooks. Set SLAs and KPIs with Security, IT, and Finance.
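
How policies are actually expressed in the product isn't covered here; the sketch below only illustrates the kind of risk-based gating the pilot and expansion phases call for, with made-up workflow names, channels, and fallback actions.

```python
# Hypothetical gating policies; workflow names, channels, and fallback
# actions are assumptions, not product configuration.
VERIFICATION_POLICIES = {
    "mfa_reset":       {"channels": ["phone", "help_desk"], "on_fail": "block_and_callback"},
    "wire_update":     {"channels": ["chat", "phone", "email"], "on_fail": "require_second_approver"},
    "executive_comms": {"channels": ["video", "chat"], "on_fail": "flag_to_soc"},
}

def action_on_failure(workflow: str) -> str:
    """Return the fallback action when verification fails for a workflow."""
    policy = VERIFICATION_POLICIES.get(workflow)
    return policy["on_fail"] if policy else "manual_review"

print(action_on_failure("wire_update"))   # require_second_approver
print(action_on_failure("unknown_flow"))  # manual_review
```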

Measurable outcomes to track

  • Reduction in impersonation-related incidents and payouts
  • Verification coverage across critical workflows (help desk, finance approvals, executive comms), as in the sketch after this list
  • Time-to-verify and user opt-out rate
  • Correlation with fewer manual call-backs and secondary checks
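
If verification events land in your SIEM or a data warehouse, coverage and time-to-verify fall out of a simple aggregation. The event fields below are hypothetical; adjust them to whatever schema you actually export.

```python
from statistics import median

# Hypothetical verification events exported for reporting; field names are illustrative.
events = [
    {"workflow": "mfa_reset",   "verified": True,  "seconds_to_verify": 4.2},
    {"workflow": "wire_update", "verified": True,  "seconds_to_verify": 6.8},
    {"workflow": "wire_update", "verified": False, "seconds_to_verify": None},
    {"workflow": "exec_comms",  "verified": True,  "seconds_to_verify": 3.1},
]

critical = {"mfa_reset", "wire_update", "exec_comms"}  # the workflows you chose to gate
covered = {e["workflow"] for e in events} & critical
times = [e["seconds_to_verify"] for e in events if e["seconds_to_verify"] is not None]

print(f"coverage of critical workflows: {len(covered) / len(critical):.0%}")  # 100%
print(f"median time-to-verify: {median(times):.1f}s")                         # 4.2s
```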

Funding and market signal

Redpoint Ventures and Battery Ventures led the $28M round, joined by Maple VC, Vessy VC, and Cerca Partners. Imper.ai is already in use across finance, healthcare, and tech, all sectors hit hard by social engineering and deepfake-enhanced fraud.

As Battery Ventures partner Barak Schoster put it, impersonation isn't a side note in modern attacks; it's the main door. Verification at the moment of human trust is becoming a new control layer for enterprises.

Context: the threat is real

Law enforcement continues to highlight losses tied to impersonation and business email compromise. See the FBI's latest IC3 report for scale and patterns: 2023 IC3 Internet Crime Report. For practical defense guidance on synthetic media, CISA's resource hub is a solid reference: CISA: Synthetic Media (Deepfakes).

Bottom line

Deepfakes and voice clones turn trust into an attack surface. Imper.ai's agentless, metadata-first verification gives teams a way to confirm identity without slowing work or exposing content. If you move money, reset access, or communicate at scale, this belongs on your shortlist.

If you're building internal skills around AI risk, policy, and practical defenses, explore relevant learning paths here: Complete AI Training - Popular Certifications.

