Human recruiters are mirroring AI bias. Here's how HR can respond
A new University of Washington study asked 528 participants to make hiring recommendations for 16 roles with help from simulated large language models. When the AI leaned biased, people followed it - even when the bias was severe.
As lead researcher Kyra Wilson put it, "Unless bias is obvious, people were perfectly willing to accept the AI's biases." That should get every HR leader's attention.
What the study found
- Without AI - or with a neutral AI - participants selected White and non-White candidates at equal rates.
- With moderately biased AI, participants' choices mirrored the model's skew (toward either White or non-White candidates).
- With severely biased AI, participants still followed the model's suggestions ~90% of the time, though they were slightly less biased than the AI.
- A quick intervention helped: bias dropped 13% among participants who took an implicit association test at the start.
Aylin Caliskan, associate professor at UW's Information School, added a reminder that applies to every recruiter using AI: "People have agency…we shouldn't lose our critical thinking abilities when interacting with AI."
Why this matters for HR
AI is already embedded in recruiting. One report shows 65% of recruiters use AI today, and 52% plan to invest more. Another survey found a third of U.S. workers think hiring at their companies will be fully run by AI by 2026.
That adoption curve collides with real legal, ethical, and brand risks. If humans mirror AI bias, "AI-assisted" can quickly turn into "AI-amplified" discrimination. The fix isn't to ditch AI - it's to add proof, controls, and accountability.
Immediate actions to reduce bias
- Demand evidence from vendors: Ask for recent bias audits, adverse-impact analyses, and performance by subgroup on relevant job families. No artifacts, no purchase.
- Set decision rules: Require structured rubrics, reason codes for each recommendation, and written human justification before moving candidates forward.
- Blind early screens: Hide names, schools, and location during initial review. Use structured scoring tied to job-relevant skills.
- Test with counterfactual resumes: Swap demographic signals on otherwise identical resumes and measure how the recommendations shift.
- Measure fairness continuously: Track selection rates, offer rates, and quality-of-hire by subgroup. Monitor the four-fifths rule and investigate gaps; a minimal version of both checks is sketched after this list.
- Tune or constrain the model: Lower randomness, remove non-predictive signals (e.g., prestige proxies), and lock prompts to job-relevant criteria.
- Human-in-the-loop with teeth: Two-person review on AI-assisted decisions, especially when the model flags "strong yes" or "strong no."
- Train decision-makers: Run short calibration sessions and an implicit association test before hiring cycles; the study's 13% drop is worth the 10 minutes.
- Keep an audit trail: Log prompts, model versions, inputs, outputs, and final decisions. If you can't replay it, you can't defend it (an example log entry follows this list).
- Create a kill switch: If drift or bias is detected, pause the tool, revert to baseline process, and issue a postmortem.
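To make the counterfactual test and the four-fifths check concrete, here is a minimal sketch in Python. The `score_resume` function is a hypothetical stand-in for whatever scoring call your tool exposes, and the subgroup labels and numbers are made up; treat this as a starting point, not a substitute for a formal adverse-impact analysis.

```python
# Minimal fairness checks: counterfactual score shift plus the four-fifths rule.
# `score_resume` is a hypothetical stand-in for your tool's scoring call.

def counterfactual_shift(resume_pairs, score_resume):
    """Average score change when only demographic signals differ between
    two otherwise identical resumes."""
    shifts = [score_resume(variant) - score_resume(original)
              for original, variant in resume_pairs]
    return sum(shifts) / len(shifts)

def four_fifths_check(decisions):
    """decisions: list of (subgroup, selected) tuples from a screening log.
    Returns each group's selection rate relative to the best-off group,
    plus any groups falling below the 80% threshold."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    ratios = {g: round(rate / top, 2) for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return ratios, flagged

# Example with made-up data: group B is selected at half group A's rate.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(log))  # ({'A': 1.0, 'B': 0.5}, ['B']) -> investigate B
```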
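For the audit trail, here is a sketch of one replayable log entry, assuming a simple append-only JSON Lines file. The field names and values are illustrative; map them to whatever your ATS and model tooling actually record.

```python
import json
from datetime import datetime, timezone

def audit_record(model_version, prompt, candidate_id,
                 ai_recommendation, human_decision, reviewer):
    """One replayable log entry for an AI-assisted screening decision.
    Field names are illustrative; adapt them to your own tooling."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "candidate_id": candidate_id,  # internal ID, not PII
        "ai_recommendation": ai_recommendation,
        "human_decision": human_decision,
        "reviewer": reviewer,
    }

# Append one JSON line per decision; a flat file is enough to replay and audit.
entry = audit_record("screener-v3", "rubric-2025-10", "cand-0042",
                     "advance", "advance", "j.doe")
with open("screening_audit.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```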
Compliance and policy checklist
- Candidate notices and rights: California's new rules will require pre-use notices, handling of consumer information requests, and risk assessments for automated hiring tools (effective January 2027). Review the regulator's updates at the California Privacy Protection Agency.
- EEOC alignment: Review guidance on AI, algorithmic fairness, and selection procedures; document your adverse-impact testing. See the EEOC's AI resources.
- Vendor contracts: Bake in audit rights, bias reporting, data retention limits, and notification of model updates that affect outcomes.
- Internal policy: Define approved use cases, prohibited inputs, fairness thresholds, and escalation paths when metrics breach limits.
How to choose or improve your hiring AI
- Start with the job, not the model: Map must-have skills and observable signals. Your rubric should lead the AI, not the other way around.
- Validate on your data: Run a backtest using past hires and outcomes. Check subgroup performance and adjust the feature set before production (a minimal backtest is sketched after this list).
- Prefer transparency: Models that provide explanations and configurable criteria are easier to audit and govern.
- Pilot with a control group: A/B test against your current process, then roll out gradually with monitoring.
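A minimal sketch of that backtest, assuming a `model_recommend` function that stands in for your tool and a history of past candidates with known outcomes. Both are placeholders for your own system and data, and past outcomes carry their own bias, so treat agreement as one signal among several rather than proof of fairness.

```python
# Backtest sketch: replay past candidates through the tool and compare its
# recommendations with known outcomes, broken out by subgroup.
# `model_recommend` is a hypothetical stand-in for your tool's API.

def backtest(history, model_recommend):
    """history: list of dicts with 'resume', 'subgroup', and 'hired' (bool).
    Returns the tool's agreement rate with past outcomes, per subgroup."""
    hits, counts = {}, {}
    for record in history:
        predicted_hire = model_recommend(record["resume"])  # True / False
        group = record["subgroup"]
        counts[group] = counts.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(predicted_hire == record["hired"])
    return {g: round(hits[g] / counts[g], 2) for g in counts}

# Large gaps in agreement between subgroups are a signal to revisit the
# feature set before the tool goes anywhere near production.
```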
Upskill the team
Your recruiters don't need to be data scientists, but they do need prompt discipline, bias awareness, and a simple measurement playbook. If you're building capability, consider structured training that blends practice with policy.
For curated options by role, see Complete AI Training: Courses by Job.
Bottom line
AI doesn't remove bias by default. It reflects what it's fed - and people often go along with it. Keep the tech, but pair it with clear rules, routine measurement, and trained humans who are willing to disagree with the model.
That's how you get speed without sacrificing fairness - and how you keep regulators, candidates, and your brand on your side.