Inclusion-focused AI reduces disability bias in hiring decisions, Macquarie study finds

Inclusion-focused AI nearly doubled hiring rates for disabled candidates in complex scenarios, a Macquarie Business School study of 238 HR professionals found. Standard AI showed no such effect: tool design determines whether bias shrinks or persists.

Published on: Apr 11, 2026


A Macquarie Business School study found that specially designed AI tools can significantly reduce discrimination against disabled candidates in recruitment, even in complex hiring scenarios where bias typically strengthens.

Researchers tested two approaches: standard AI focused on efficiency and technical criteria, and inclusion-focused generative AI that actively guides decision-makers toward fairness. The difference was substantial.

Where bias takes hold

Disability discrimination remains persistent in hiring despite growing diversity efforts. The problem intensifies when hiring decisions become complex.

In an experiment with 238 HR professionals, disabled candidates were selected 34 percent of the time during complex hiring scenarios, compared to a neutral benchmark of 50 percent. In simpler decisions, the gap narrowed significantly.
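The reported numbers can be put in perspective with a quick calculation. This is an illustrative sketch of the gap the study describes, not an analysis of its data:

```python
# Illustrative arithmetic: with a neutral benchmark of 50%, a 34% selection
# rate means disabled candidates were chosen roughly a third less often than
# an unbiased process would predict.
neutral_benchmark = 0.50
complex_rate = 0.34

gap = neutral_benchmark - complex_rate       # absolute shortfall in selection rate
relative_gap = gap / neutral_benchmark       # shortfall as a share of the benchmark

print(f"Absolute gap: {gap:.0%}")            # 16%
print(f"Relative shortfall: {relative_gap:.0%}")  # 32%
```

In other words, the complex-scenario condition left disabled candidates selected about 32 percent less often than the neutral benchmark.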

This pattern reflects how the human brain works under pressure. When decisions are straightforward, managers focus on concrete skills and qualifications. As complexity increases, people rely more on mental shortcuts and stereotypes.

How inclusion-focused AI performs differently

Standard AI tools don't automatically fix bias. Inclusion-focused AI takes a different approach: it prompts evaluators to focus on job-relevant competencies, highlights fairness considerations, and reduces reliance on stereotypes.

In complex hiring decisions, inclusion-focused AI nearly doubled hiring rates for disabled applicants compared to standard AI. Even in simpler decisions, it consistently reduced bias.

The mechanism draws on Construal Level Theory, which explains how psychological distance affects decision-making. When decisions feel distant or abstract, people think in broad, simplified terms, where stereotypes dominate. Inclusion-focused AI reduces that distance by keeping attention on concrete details: specific skills, qualifications, and evidence about individual merit.

Implementation requires careful calibration

The research identified one risk: in some scenarios, inclusion-focused AI appeared to overcorrect, leading to higher-than-neutral selection rates for disabled candidates. This raises the possibility of inverted bias.

This doesn't eliminate the benefits, but it signals the need for careful setup and monitoring.
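One way to operationalize that monitoring is to flag selection rates that drift from the neutral benchmark in either direction. The function and thresholds below are hypothetical, offered only as a sketch of the idea; real deployments would need statistically grounded tolerances:

```python
# Hypothetical monitoring sketch: flag drift from a neutral benchmark in
# either direction, catching both under-selection (bias against a group)
# and over-selection (the inverted bias the study warns about).
# The 5-point tolerance is an illustrative assumption, not from the study.
def check_selection_rate(selected: int, total: int,
                         benchmark: float = 0.50,
                         tolerance: float = 0.05) -> str:
    rate = selected / total
    if rate < benchmark - tolerance:
        return "under-selection: possible bias against the group"
    if rate > benchmark + tolerance:
        return "over-selection: possible inverted bias"
    return "within tolerance of the neutral benchmark"

print(check_selection_rate(34, 100))   # under-selection flag
print(check_selection_rate(62, 100))   # over-selection flag
```

Both failure modes trigger review; the goal is selection driven by merit, not a rate pinned above or below chance.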

To build effective fairness infrastructure, AI tools should:

  • Prompt evaluators to focus on job-relevant competencies
  • Embed diversity and inclusion principles into decision workflows
  • Make reasoning transparent and auditable
  • Support rather than replace human judgment
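The principles above could plausibly be embedded in the instructions an AI tool generates for evaluators. The following is a hypothetical sketch of that idea; the function name, role, and competencies are illustrative and not drawn from the study:

```python
# Hypothetical sketch: an evaluation prompt that foregrounds job-relevant
# competencies, embeds fairness guidance, and asks for auditable reasoning.
# All names here are illustrative assumptions.
def build_evaluation_prompt(role: str, competencies: list[str]) -> str:
    lines = [
        f"Evaluate each candidate for the role of {role}.",
        "Assess ONLY the following job-relevant competencies:",
    ]
    lines += [f"- {c}" for c in competencies]
    lines += [
        "Base every judgment on concrete evidence from the application.",
        "Do not weigh disability, age, gender, or other protected traits.",
        "For each rating, record the specific evidence supporting it,",
        "so a human reviewer can audit the reasoning before deciding.",
    ]
    return "\n".join(lines)

print(build_evaluation_prompt("Data Analyst", ["SQL", "stakeholder reporting"]))
```

Note the last instruction keeps a human reviewer in the loop: the AI structures the evaluation, but the final decision and its audit trail remain human responsibilities.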

The goal is not to remove humans from hiring decisions but to structure the process so bias has less room to operate. AI becomes one component alongside structured interviews, standardized criteria, and accountability processes.

For HR professionals implementing AI in recruitment, the takeaway is straightforward: not all AI tools address bias equally. The design matters. AI for Human Resources requires intentional focus on fairness, not just efficiency.

