AI Deepfakes Threaten Hiring Integrity: What HR Needs to Know
Filling job vacancies is already a tough task, but HR leaders now face a new challenge: AI-generated deepfake candidates. As this technology improves, the risk of fake applicants undermining hiring processes is growing.
Dr Toby Murray, Professor of Computing and Information Systems at the University of Melbourne, warns that the technology has advanced far enough to deceive people. He points to cases where deepfake video calls have been used in financial scams, signaling the technology's potential for fraud in recruitment.
The Rise of Deepfake Applicants
Research from Gartner predicts that by 2028, one in four job candidates globally could be fake, created using sophisticated generative AI tools. These tools can replicate facial expressions, blinking, and even subtle micromovements, making detection extremely difficult—even for advanced algorithms.
Murray explains how the problem can escalate:
“Candidates might submit AI-written resumes, participate in video interviews assisted by real-time AI, and continue using generative AI once hired. This blurs the lines around authenticity and actual capability.”
Why Current Detection Tools Fall Short
Most deepfake detection tools aren’t foolproof, which puts HR professionals at risk of being caught off guard. Murray emphasizes that the best defense involves educating hiring managers, implementing thorough vetting processes, and maintaining critical human oversight throughout recruitment.
This need for human judgment grows as companies increasingly automate hiring steps with AI—sometimes even using the same AI tech that fraudsters exploit. This creates a “perfect storm” where fake candidates might slip through automated screenings unnoticed.
Lessons From Education and Security
Universities face similar challenges with online exams, ensuring the person on camera is the real student. HR can learn from these approaches to identity verification during remote assessments.
The risks extend beyond hiring the wrong person. Fraudulent hires may be after access to sensitive company data or trade secrets rather than a paycheck. Verifying both identity and intent becomes crucial to protecting organizational assets.
Preparing for the Future
Murray stresses that HR must act now by raising awareness and developing policies to address misuse of AI in hiring. Without clear data on the scale and methods of abuse, targeted responses remain difficult.
Proactive education, strong vetting, and retaining human involvement in recruitment decisions are key to maintaining hiring integrity as AI technology evolves.
For HR professionals interested in strengthening their AI knowledge and skills, exploring up-to-date AI courses can provide practical insights on managing AI risks in recruitment.