AI Hiring Tools Under Fire: Evidence Mounts of Discrimination at Workday, Amazon, and Beyond

AI hiring tools like Workday’s face allegations of bias based on race, age, and disability. Without careful oversight, these systems risk perpetuating discrimination in recruitment.

Published on: Jul 06, 2025

Workday and Amazon’s AI Hiring Biases Highlight Risks of Discrimination

Recent concerns about AI-assisted hiring tools reveal how these systems may unintentionally deepen existing biases in recruitment. Workday, a leading workplace management software firm, faces allegations that its AI platform discriminates against job applicants based on race, age, and disability. Experts warn that if AI models are not carefully designed to address bias, they risk perpetuating and even worsening patterns of exclusion.

Despite AI’s promise to streamline hiring and broaden opportunity, the technology can embed long-standing prejudices. As of 2024, 492 of the Fortune 500 companies used applicant tracking systems, yet improperly developed systems can produce biased outcomes that treat candidates unfairly.

Evidence of Bias in AI Hiring Tools

A study from the University of Washington Information School examined AI resume screening across nine occupations using 500 applications. The results showed a preference for white-associated names in 85.1% of cases, while female-associated names were favored in only 11.1% of cases. In some scenarios, Black male candidates were disadvantaged in up to 100% of screenings compared to white males.

These findings suggest a feedback loop where biased data trains biased models, worsening discrimination over time. Workers have also reported real-world discrimination. Five plaintiffs over 40 years old filed a collective lawsuit against Workday, claiming its AI caused repeated job rejections based on protected characteristics. Workday denies these claims and emphasizes human oversight in hiring decisions.

Broader Concerns Beyond Hiring

Amazon faces criticism from employees with disabilities, who claim the company violated the Americans with Disabilities Act by using AI-driven processes that failed to provide appropriate accommodations. Amazon maintains that AI does not make final decisions about accommodations and follows strict guidelines to ensure fairness.

Why AI Hiring Tools Can Be Biased

AI systems rely heavily on the data they are trained on. If training data reflects historical hiring patterns favoring certain demographics, the AI will likely replicate those biases. Elaine Pulakos, CEO of a talent assessment developer, explains that without careful data management and oversight, AI tools can produce unpredictable and unfair results.

Human biases embedded in data are at the root of AI discrimination. A meta-analysis of 90 studies across six countries found that employers called back white applicants 36% more often than Black applicants with identical resumes, highlighting persistent systemic bias. When AI scales these decisions, that impact can be amplified.

Victor Schwartz from a remote work job platform notes the double-edged nature of AI: it can either enforce fairness at scale or institutionalize discrimination more efficiently than human recruiters.

Addressing AI Bias in Hiring

Current laws such as Title VII of the Civil Rights Act prohibit both intentional discrimination and practices with a disparate impact, but regulations specific to AI employment discrimination are lacking. Disparate impact refers to facially neutral policies that disproportionately disadvantage protected groups, a standard that can apply to AI tools screening candidates.
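In practice, disparate impact is often assessed with the EEOC's "four-fifths rule": a selection procedure is typically flagged when any group's selection rate falls below 80% of the highest group's rate. A minimal sketch of that check, using made-up screening numbers (the group labels and counts here are illustrative, not drawn from any real audit):

```python
# Hypothetical screening outcomes per group: (candidates advanced, total applicants).
# Numbers are invented for illustration only.
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

def selection_rates(outcomes):
    """Fraction of applicants advanced, per group."""
    return {g: advanced / total for g, (advanced, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def flag_disparate_impact(outcomes, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, ratio in impact_ratios(outcomes).items() if ratio < threshold]

# group_b's rate (0.30) is 62.5% of group_a's (0.48), below the 80% line.
print(flag_disparate_impact(outcomes))  # → ['group_b']
```

The same impact-ratio calculation underlies the bias audits some jurisdictions now require before AI hiring tools can be deployed; a real audit would also involve statistical significance testing and much larger samples.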

However, enforcement is complicated. Recent moves to reduce disparate impact oversight could limit agency efforts to investigate AI-related discrimination. Some local governments, like New York City, have enacted ordinances requiring bias audits before AI hiring tools can be used.

Employment lawyers emphasize transparency and the option for candidates to opt out of AI screening where possible. Meanwhile, firms that develop AI hiring tools advocate for human evaluation of AI outputs and ongoing audits to reduce bias.

Properly designed AI also holds potential to improve workforce diversity. Research shows women tend to apply for fewer jobs, and often only when fully qualified. AI-driven tools that streamline applications can lower those barriers and help level the playing field.

Conclusion

AI in hiring presents both risks and opportunities. Without careful oversight, it may replicate and exacerbate discrimination already present in hiring data. However, with transparency, audits, and human involvement, AI tools can help create fairer hiring processes.

For professionals navigating AI in HR and legal contexts, it is essential to stay informed and implement bias mitigation strategies. Training and resources on AI ethics and application can further support fair hiring practices. Explore AI courses and training options to better understand these challenges and solutions at Complete AI Training.