AI Fraud, Deepfakes, and Fake Resumes: How Employers Are Fighting Back Against Deceptive Jobseekers

Fraud by jobseekers using AI tools such as deepfakes is on the rise, prompting two-thirds of hiring managers to support mandatory live-only interviews. Manual detection often falls short, making combined human and software checks essential.

Fraud involving deceptive jobseekers is on the rise, prompting HR professionals to rethink hiring strategies. A recent survey found that two-thirds of hiring managers support mandatory live-only interviews as a way to verify candidate identities. This response is driven by increasing reports of jobseekers using AI tools such as deepfakes to misrepresent themselves with manipulated images, video, or audio during recruitment.

One survey revealed that 17% of U.S. hiring managers have encountered candidates using deepfake technology to alter video interviews. This trend raises significant concerns about the authenticity of candidates and the effectiveness of traditional verification methods.

Growing Sophistication in Deceptive Tools

Deepfake technology, once limited to expert users, has become widely accessible. Cybersecurity experts note that tools like ChatGPT and other AI platforms have lowered the barrier for creating convincing fake content. What used to require technical expertise and resources is now achievable by a broad audience, increasing the risk of fraud.

Beyond deepfakes, hiring professionals report seeing AI-generated résumés (72%), fake portfolios (51%), fake references (42%), fake credentials (39%), voice filters (17%), and face-swapping in video interviews (15%). The sophistication of these tools means that relying on phone calls or emails for identity verification is no longer sufficient.

Overconfidence in Human Detection

Despite widespread fraud, many hiring professionals remain confident they can spot AI-generated content without specialized software. Three in four believe they can identify fake content manually, and two-thirds regularly review credentials by hand. However, only 31% use AI-powered detection software.

Experts warn this confidence is misplaced. Human bias and the emotional connection formed during interviews can blind recruiters to red flags. Additionally, deepfake technology is advancing faster than most people’s ability to detect it, making manual detection unreliable.

Combining Human Judgment with Detection Software

Detection software has improved but is not foolproof. Live interviews still play a crucial role, especially when interviewers include personal, unexpected questions or interrupt responses. These tactics make it harder for bots or deepfake videos to maintain a natural flow.

Detection tools offer an additional layer of security, but experts emphasize that these technologies and techniques need to be used together. Open reporting policies and regular in-person meetings with remote employees can deter fraud attempts by signaling that identity verification is taken seriously.
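
As a rough illustration of this layered approach, the sketch below (in Python) folds a detector's confidence score and the outcomes of human checks into a single risk flag. The thresholds, field names, and 0-to-1 scoring scale are assumptions made for illustration, not the behavior of any specific detection product.

```python
from dataclasses import dataclass

# Minimal sketch of layering automated detection with human checks.
# `detector_score` stands in for the output of whatever deepfake-detection
# tool an organization licenses (hypothetical; no specific product implied).

@dataclass
class ScreeningResult:
    detector_score: float        # 0.0 (likely authentic) .. 1.0 (likely fake)
    passed_live_challenge: bool  # e.g. a codeword check during the interview
    credentials_verified: bool   # transcripts / references confirmed

def risk_level(result: ScreeningResult) -> str:
    """Combine signals: no single check clears a candidate on its own."""
    if result.detector_score > 0.8 or not result.passed_live_challenge:
        return "high"    # escalate, e.g. require an in-person interview
    if result.detector_score > 0.5 or not result.credentials_verified:
        return "medium"  # add extra verification steps
    return "low"

print(risk_level(ScreeningResult(0.2, True, True)))   # low
print(risk_level(ScreeningResult(0.6, True, False)))  # medium
```

The point of the design is that software and human signals veto each other: a clean detector score does not excuse a failed live check, and vice versa.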

Training Challenges

Only 37% of hiring professionals have received training on AI-related hiring fraud. The rapid pace of AI development means training quickly becomes outdated unless it is continuously refreshed. Hands-on, scenario-based training with real examples is the most effective way to help recruiters recognize deepfake signs and other AI deception tactics.

Live Interviews and Stronger Credential Checks

Many organizations are revisiting their hiring processes. More than half of hiring professionals support stricter credential verification, and live interviews are becoming mandatory even for remote roles. For example, some companies require candidates to meet multiple team members face-to-face to confirm authenticity and cultural fit.

For sensitive or critical roles, in-person interviews and thorough credential checks are essential. Verifying transcripts, contacting educational institutions, and scrutinizing references can uncover misrepresentations that AI tools might otherwise hide.

Red Flags to Watch For

  • Visual indicators: Unnatural facial expressions, inconsistent skin texture, flickering or blurring around facial edges, awkward lip movements, or unusual eye contact.
  • Audio indicators: Robotic or monotone speech, unnatural intonation, lagged responses, or overly consistent ambient sound with no background noise.
  • Contextual cues: Odd phrasing, grammatical errors, sudden tone shifts, or media from unverified sources.
  • Validation strategies: Use real-time challenges, pre-agreed codewords, or time-sensitive verification questions to confirm identity (a sketch of one such codeword check follows this list).
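
To make the last item concrete, here is a minimal sketch in Python of a time-sensitive codeword check. The codeword format, the 60-second window, and the manual transcription step are illustrative assumptions, not a prescribed tool or workflow.

```python
import secrets
import time

# Minimal sketch of a time-sensitive identity challenge for a live interview.
# The interviewer generates a one-time codeword just before asking and has
# the candidate read it back on camera within a short window. A pre-recorded
# or pre-rendered deepfake stream cannot react to a value it has never seen.

CHALLENGE_TTL_SECONDS = 60  # hypothetical window; tune to your process

def issue_challenge() -> tuple[str, float]:
    """Create a random codeword and record when it was issued."""
    codeword = secrets.token_hex(3)  # e.g. 'a41f9c', easy to read aloud
    return codeword, time.monotonic()

def verify_response(expected: str, spoken: str, issued_at: float) -> bool:
    """Accept only an exact match given within the time window."""
    within_window = (time.monotonic() - issued_at) <= CHALLENGE_TTL_SECONDS
    return within_window and spoken.strip().lower() == expected

# Example flow during a live video interview:
codeword, issued_at = issue_challenge()
print(f"Ask the candidate to read back: {codeword}")
# ... candidate responds on camera; interviewer types what they heard ...
heard = codeword  # stand-in for the interviewer's transcription
print("Verified" if verify_response(codeword, heard, issued_at) else "Failed")
```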

HR’s Role in Combating AI Fraud

Addressing AI-driven hiring fraud requires an organization-wide effort. Leadership must prioritize the issue, IT teams should provide expertise and tools, and HR must coordinate training and enforce consequences for fraud. Collaboration ensures that everyone understands the risks and is prepared to act.

Cybersecurity and compliance teams can drive awareness and training, but HR facilitates conversations and applies policies. This multidisciplinary approach strengthens defenses against sophisticated fraud attempts.

For HR professionals interested in strengthening their skills in AI and fraud detection, targeted AI courses can provide valuable insights. Resources like Complete AI Training’s courses for HR professionals offer practical guidance on understanding AI’s impact on recruitment.