Women Face Harsher Judgment for Using AI in Job Applications, Study Finds
Reviewers doubted the competence of a female job candidate who used AI to write her résumé at twice the rate they did for an identical male candidate, and questioned her trustworthiness more often as well, according to research by Zehra Chatoo, founder of the think tank Code For Good Now.
Chatoo distributed two identical résumés to 1,000 British adults: one attributed to Emily Clarke, the other to James Clarke. Reviewers were told both were created with AI assistance. They were 22% more likely to question Emily's trustworthiness, and twice as likely to doubt her competence.
The feedback revealed the bias plainly. On Emily's résumé: "She can't even write a CV herself; not sure she has the skills to carry out the job." On James's: "He just needed a bit of help putting it together."
Chatoo summarized the pattern: "When men use AI, we question their effort. When women use AI, we question their integrity. That difference changes the perceived risk of using AI."
The Adoption Gap
This perception gap contributes to a broader disparity in AI adoption. Harvard Business School research found women adopt AI at roughly 25% lower rates than men.
Women worry that using AI will make them appear incompetent or dishonest, even when they produce correct results. A Brookings Institution study this year found that 86% of roles with high AI exposure but low capacity to adapt are held by women, meaning many women work in the jobs most affected by the technology but feel least able to use it.
A Caltech survey of 3,000 people found women were consistently more skeptical that AI benefits would outweigh risks and less convinced the technology would help their careers.
Gen Z Men Most Critical
Generational attitudes compound the problem. Gen Z men in Chatoo's study were the harshest judges of female AI use, describing Emily's résumé as "weak" at 3.5 times the rate they did James's. The identical content earned James a 97% approval rating and Emily a 76% rating.
The implication for HR professionals is direct: if people believe they'll be judged more harshly for using AI, they won't adopt it, regardless of whether they're capable. Closing the adoption gap requires examining not just how people use AI, but how that use gets evaluated.
For teams building hiring processes and performance frameworks, the research suggests bias in AI assessment may be limiting your talent pool. When women avoid AI tools because of perceived penalties, your organization forgoes efficiency gains from a significant portion of its workforce.
HR leaders managing AI adoption might consider how evaluation criteria and feedback language differ based on gender. Standardizing how AI use is assessed, rather than treating it as a character question for some employees and a practical shortcut for others, could help close both the adoption gap and the fairness gap.
Learn more about AI for Human Resources or explore the AI Learning Path for CHROs to understand how to implement AI tools equitably across your organization.