AI Advice Sways Hiring Choices, Even When It's Biased

New research finds recruiters mirror biased AI picks rather than correcting them. Human-in-the-loop isn't enough: set override rules, blind early screens, audit decisions, and give reviewers time to judge.

Published on: Nov 28, 2025

AI Isn't a Hiring Shortcut: New Research Shows Humans Mirror Model Bias

AI is moving deeper into recruiting while many companies trim HR headcount. A new University of Washington study signals a clear risk: when people collaborate with biased AI, they tend to mirror that bias instead of correcting it.

That should stop every TA and HR leader in their tracks. "A lot of regulations and recommendations… say that you should be using human collaboration… The findings show that's not really effective," said Kyra Wilson, the study's lead researcher.

What the researchers did

More than 520 participants reviewed resumes that large language models had already screened. The models were intentionally seeded with different levels of racial bias. Participants had four minutes to review each set and pick the top three candidates.

Candidates were equally qualified across 16 job categories (from housekeeper to nurse to systems analyst). Names and resume entries could signal race (for example, involvement in identity-based affinity groups). A clearly unqualified "distractor" candidate was added to obscure the study's purpose.

What they found

  • Without AI, or with a "neutral" AI, participants picked White and non-White candidates at similar rates.
  • With biased AI, participants mirrored the model's preferences. If the AI leaned White, so did the humans. If it leaned non-White, humans followed that too.
  • With the most biased systems, humans were slightly less biased than the models themselves, but still went along with AI picks roughly 90% of the time.

This isn't a small effect. It compounds across dozens or hundreds of roles. "Bias can sometimes be hard to see in these systems," Wilson said. "You don't necessarily see how that will have broader effects when more decisions are stacked together."

Why this matters for HR

Some companies are rolling out AI while cutting the HR teams that usually run structured, fair processes. That's a risky combination. As Herman Aguinis put it, AI is like a power tool: in expert hands it's precise; in novice hands, it can cause damage fast.

Lisa Simon noted the obvious trap: if AI reinforces a gut instinct, people feel validated and bias snowballs. And as Sara Gutierrez said, speed without accuracy just gets you to the wrong answer faster.

Guardrails HR can implement now

  • Set policy beyond "human-in-the-loop." Define when to ignore or overturn AI suggestions. Document who has that authority.
  • Blind early screening. Remove names, photos, and identity markers before any scoring. Strip proxies (affinity groups, certain memberships) from AI inputs; see the first sketch after this list.
  • Force-justified acceptance. When a recruiter follows AI picks, require a short written rationale. Audit a random sample weekly.
  • Two-pass process. Use AI for admin tasks (dedupe, formatting). Human reviewers score with structured rubrics and weighted criteria.
  • Counterfactual testing. Run the same resume with different names. Track selection rate parity and adverse impact before go-live and monthly after; see the second sketch after this list.
  • Monitor alignment. If recruiters accept AI recommendations >80% of the time, trigger a review. High alignment can mask model bias; see the third sketch after this list.
  • Remove time pressure. Four-minute review windows push people to accept AI picks. Give space for judgment.
  • Vendor requirements. Demand bias evaluations, test datasets, and mitigation plans. Add fairness KPIs and penalty clauses to contracts.
  • Calibrate your team. Run short sessions showing real examples of AI-led bias. Practice saying "no" to flawed recommendations.
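
Blind screening can start as a plain preprocessing step. Here's a minimal sketch, assuming resumes arrive as plain text and that your team maintains its own proxy-term list; blind_resume and PROXY_TERMS are illustrative names, not part of any vendor API, and the placeholder strings stand in for whatever your review identifies.

```python
import re

# Placeholder proxy terms; a real deployment would maintain a reviewed,
# auditable list per job family. These strings are illustrative only.
PROXY_TERMS = [
    "Example Identity-Based Affinity Group",
    "Example Cultural Professionals Network",
]

def blind_resume(text: str, candidate_name: str) -> str:
    """Redact the candidate's name and known identity proxies
    before the text reaches any human or AI scorer."""
    redacted = re.sub(re.escape(candidate_name), "[CANDIDATE]",
                      text, flags=re.IGNORECASE)
    for term in PROXY_TERMS:
        redacted = re.sub(re.escape(term), "[REDACTED AFFILIATION]",
                          redacted, flags=re.IGNORECASE)
    return redacted
```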
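
The counterfactual check is also a small loop: score the same resume under names associated with different groups, then compare selection rates using the standard four-fifths (80%) adverse-impact rule. In this sketch, score_resume is an assumed wrapper around your vendor's scoring call, and the name groups and cutoff are placeholders you'd supply.

```python
def counterfactual_rates(resume_text, score_resume, name_groups, cutoff):
    """Score one resume under names associated with different groups.
    Returns the selection rate (share of names scoring >= cutoff) per group."""
    rates = {}
    for group, names in name_groups.items():
        selected = [
            score_resume(resume_text.replace("[CANDIDATE]", name)) >= cutoff
            for name in names
        ]
        rates[group] = sum(selected) / len(selected)
    return rates

def adverse_impact_ratios(rates):
    """Four-fifths rule: any group whose selection rate falls below 80%
    of the highest group's rate warrants investigation."""
    top = max(rates.values())
    return {group: (rate / top if top else 0.0) for group, rate in rates.items()}

# Usage (score_resume wraps your vendor's model -- an assumption here):
# rates = counterfactual_rates(blinded_text, score_resume,
#                              {"group_a": [...], "group_b": [...]}, cutoff=0.7)
# flags = {g: r < 0.8 for g, r in adverse_impact_ratios(rates).items()}
```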
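
Alignment monitoring needs nothing more than the decision log. A sketch, assuming your ATS records each AI recommendation next to the recruiter's final pick (the pair format is an assumption about your log schema):

```python
ALERT_THRESHOLD = 0.80  # the >80% review trigger from the guardrail above

def alignment_rate(decisions):
    """decisions: iterable of (ai_pick, human_pick) candidate-ID pairs
    pulled from ATS logs (assumed log format)."""
    pairs = list(decisions)
    if not pairs:
        return 0.0
    return sum(1 for ai, human in pairs if ai == human) / len(pairs)

def needs_review(decisions):
    """Flag a recruiter or requisition whose agreement with the AI
    exceeds the threshold -- high alignment can mask model bias."""
    return alignment_rate(decisions) > ALERT_THRESHOLD
```

Run this per recruiter and per requisition: an aggregate rate can look healthy while one role or reviewer rubber-stamps every AI pick.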

Questions to ask your vendors and team

  • What are selection rates across racial groups for matched resumes? Show the method, not just a summary.
  • How often do users accept model picks? Is there a threshold that triggers alerts?
  • Can the system run and report counterfactuals (same resume, different name)?
  • Is there a "neutral mode" that strips identity proxies from inputs and outputs?
  • How are logs stored and audited? Who can access them, and how long are they retained?

What not to do

  • Don't replace structured HR review with AI recommendations.
  • Don't assume "human oversight" fixes bias without training, time, and authority to disagree.
  • Don't scale pilots before running bias tests across your real job families.

Further reading

The study was presented at the AAAI/ACM Conference on AI, Ethics, and Society. You can explore the event and related work here: AAAI/ACM AIES.

Upskill your HR and TA teams

If you're adopting AI in recruiting, invest in skills that reduce risk and improve decisions. Explore practical learning paths here: AI courses by job.

