AI Job Interviews in Australia May Discriminate Against Accents and Disabilities, Study Warns

AI recruitment tools risk bias against candidates with accents or speech disabilities due to unrepresentative training data. HR professionals face challenges ensuring fairness and transparency.

Categorized in: AI News, Human Resources
Published on: May 15, 2025

AI Job Interviews: A New Discrimination Risk for HR Professionals

Artificial intelligence is increasingly used in recruitment, but recent Australian research raises concerns about the risks of bias against certain job candidates. Those with accents or speech disabilities may face unfair treatment when interviewed by AI systems.

Growth of AI in Recruitment

Globally, AI use in hiring is on the rise. In a survey of 4,000 employers by AI recruitment firm HireVue, 72% reported using AI tools in 2025, up from 58% the year before. In Australia, adoption of AI in hiring is currently estimated at around 30%, but it is expected to rise significantly within five years.

Biases in AI Training Data

AI recruitment systems rely heavily on the datasets they are trained on. Australian research indicates these datasets often reflect American demographics more than local ones. For example, one company reported that only 6% of its training data came from Australia or New Zealand, and that 36% of its dataset consisted of white applicants. This imbalance risks skewing AI assessments of local candidates.

Speech recognition accuracy also varies. The same company reported a word error rate below 10% for native English speakers in the US, rising to 12–22% for non-native speakers with accents, such as those from China. Word error rate is the share of words a speech-to-text system transcribes incorrectly, so a higher rate means a candidate's answers are captured less faithfully before any scoring takes place. Misinterpreted speech can directly affect candidate ratings.
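
For readers who want to see the arithmetic, the sketch below shows the standard way word error rate is computed: substituted, inserted and deleted words divided by the length of the reference transcript. It is a generic illustration in Python, not the vendor's actual pipeline, and the example sentences are invented.

    # Generic word error rate (WER) calculation: edit distance over words,
    # divided by the length of the reference transcript. Illustrative only.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref = reference.split()
        hyp = hypothesis.split()
        # Levenshtein distance over words via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(
                    d[i - 1][j] + 1,         # a word was dropped
                    d[i][j - 1] + 1,         # a word was added
                    d[i - 1][j - 1] + cost,  # a word was swapped (or matched)
                )
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # Invented example: small transcription slips quickly raise the rate.
    reference = "i led a team of five analysts on the migration project"
    print(word_error_rate(reference, "i led a team of five analysts on the migration object"))  # ~0.09
    print(word_error_rate(reference, "i lead a team of five analyst on the migration object"))  # ~0.27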

Consequences for Candidates and Recruiters

HR professionals interviewed in the study shared concerns that AI systems might inaccurately transcribe or evaluate candidates who are non-native speakers or have speech disabilities. Some sought assurances from software vendors about fairness, but these were often vague and unsupported by data.

Transparency is a major issue. Recruiters themselves often do not understand how AI decisions are made, making it impossible to explain outcomes or provide feedback to candidates. This lack of clarity complicates accountability for potential discrimination.

Legal and Regulatory Landscape

No AI discrimination case has yet reached an Australian court; complaints must first be lodged with the Australian Human Rights Commission. Past incidents show the risks: in 2022, 11 promotion decisions at Services Australia were overturned after AI-based selection methods were found to be flawed.

Experts suggest that new regulations, such as a dedicated AI act, could help mitigate these risks. Strengthening existing discrimination laws to cover AI hiring tools may also be necessary to protect applicants and guide employers.

What HR Professionals Should Do Now

  • Be cautious when adopting AI recruitment tools and understand their limitations.
  • Request data on how AI systems perform with diverse candidate groups; a sketch of the kind of breakdown to ask for follows this list.
  • Advocate for transparency from AI vendors about their algorithms and training data.
  • Prepare to support candidates who may be disadvantaged by AI assessments.
  • Stay informed on evolving legal requirements regarding AI and discrimination.
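
As one concrete way to act on the performance-data point above, the sketch below (in Python, with invented numbers rather than figures from the study) shows how vendor-supplied outcome data could be checked for adverse impact. It uses the widely cited four-fifths rule of thumb as a screening threshold; that rule comes from US guidance and is not an Australian legal standard.

    # Illustrative adverse-impact check on hypothetical screening outcomes.
    # A group whose pass rate falls below 80% of the best-performing group's
    # rate is flagged for review (the "four-fifths" rule of thumb).
    results = {
        "native English speakers": {"assessed": 400, "passed": 220},
        "non-native English speakers": {"assessed": 150, "passed": 60},
        "candidates with speech disability": {"assessed": 30, "passed": 9},
    }

    rates = {group: r["passed"] / r["assessed"] for group, r in results.items()}
    benchmark = max(rates.values())  # highest pass rate as the reference point

    for group, rate in rates.items():
        ratio = rate / benchmark
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: pass rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")

A flagged group is a prompt to ask the vendor for more detail, not proof of discrimination on its own.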

For HR professionals looking to deepen their knowledge of AI tools and ethical implementation, focused AI courses can provide valuable insights. Resources like Complete AI Training's latest courses offer practical guidance tailored to recruitment and HR challenges.