AI Hiring Tools Face New Legal Scrutiny as Laws and Lawsuits Mount

Over half of businesses use AI for hiring, but legal risks around bias and transparency grow. New laws like Colorado’s AI Act require disclosure and anti-discrimination measures.

Categorized in: AI News, Legal
Published on: Jun 21, 2025

Legal Changes Impacting AI Use in Candidate Recruiting and Screening

More than half of businesses (55%, according to a University of Southern California study) are investing in AI-driven recruiting tools. These tools streamline hiring by speeding up candidate screening, easing HR workloads, and lowering costs. Yet using AI in hiring carries legal risks, especially around bias and transparency.

There are growing concerns that AI screening tools may discriminate against candidates based on race, gender, age, or disability. Lawsuits have already been filed alleging such discrimination, highlighting the need for employers to carefully evaluate their AI systems.

Key Litigation: Mobley v. Workday, Inc.

In a notable case, Mobley v. Workday, Inc., a federal court in California allowed a class action lawsuit to proceed against Workday, a provider of AI-based applicant screening software. The plaintiff claimed he was denied 80 to 100 job opportunities because of bias embedded in Workday’s algorithm.

The court ruled that Workday could be treated as an “employer” under Title VII, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) because it acts as an agent of its clients. The case remains in discovery, underscoring how important it is for employers to understand how their AI tools operate and to be ready to demonstrate compliance with anti-discrimination laws.

Emerging Legislation on AI Employment Screening

While courts have yet to definitively rule on whether AI screening tools violate federal or state anti-discrimination laws, several states are enacting legislation to regulate AI use in employment decisions.

  • Colorado Artificial Intelligence Act (CAIA): Effective February 1, 2026, this law requires disclosure when consumers interact with AI systems and mandates protections against algorithmic discrimination. It applies broadly to any employer using high-risk AI systems affecting Colorado residents, regardless of the employer’s physical location.
  • Illinois Human Rights Act Amendments: From January 1, 2026, employers in Illinois are prohibited from using AI tools that discriminate against employees based on protected classes.

On the federal level, a provision in the proposed One Big Beautiful Bill Act would temporarily bar state and local AI regulations in order to establish uniform federal oversight. However, the proposal faces bipartisan criticism and is unlikely to pass the Senate.

Practical Recommendations for Employers

Employers deploying AI in recruiting and screening should:

  • Stay updated on state-specific AI regulations affecting employment practices.
  • Understand the screening algorithms and data inputs their AI tools rely on.
  • Be prepared to clearly explain the role AI plays in hiring decisions.
  • Document efforts to prevent disparate impacts on protected groups (see the illustrative sketch after this list).
  • Consult legal counsel to ensure compliance and reduce litigation risks.
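One concrete way to document disparate-impact monitoring is the EEOC’s “four-fifths” (80%) rule of thumb, which compares each group’s selection rate to that of the group with the highest rate. The Python sketch below is purely illustrative: the group labels, counts, and function names are hypothetical, and the 80% threshold is a screening heuristic, not a legal safe harbor or a substitute for counsel.

```python
# Illustrative sketch of a four-fifths (80%) rule check.
# All group names and counts below are hypothetical placeholder data.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the AI screen."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    A ratio below 0.8 is commonly treated as a signal of possible
    disparate impact and a prompt to review the screening step.
    """
    rates = {g: selection_rate(sel, total) for g, (sel, total) in groups.items()}
    benchmark = max(rates.values())
    return {g: (rate / benchmark if benchmark else 0.0) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes: (candidates advanced, candidates screened)
    outcomes = {"group_a": (48, 120), "group_b": (30, 110)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Keeping periodic reports like this, alongside notes on any follow-up review, is one way to show a documented effort to detect and address disparate impact.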

As AI tools become more ingrained in hiring, maintaining transparency and fairness is critical. Employers who proactively address these issues will be better positioned to avoid legal challenges.

For more information on AI compliance and training, explore resources at Complete AI Training.