AI Hiring Scores Face a Legal Test: What HR Needs to Know
Applying for a job is hard enough. Now candidates are asking if an algorithm quietly filtered them out before a human ever looked. A new lawsuit aims to pull back the curtain, and it could reshape how HR teams use AI screening tools.
The case argues that automated applicant "scores" should be treated like credit checks and fall under the Fair Credit Reporting Act (FCRA). If a court agrees, common AI hiring practices may need immediate changes.
The case at a glance
A proposed class action was filed in California by two women working in STEM who say they were screened out by AI systems despite being qualified. "I've applied to hundreds of jobs, but it feels like an unseen force is stopping me from being fairly considered," said plaintiff Erin Kistler. With more employers adopting AI, this isn't an edge case: roughly 88% of companies use AI for initial candidate screening, according to the World Economic Forum.
The lawsuit targets Eightfold, a human resources AI company whose tools generate a "match score" from 0 to 5 to predict fit between a candidate and a job. The claim: those scores function as "consumer reports" because they aggregate personal data and influence employment decisions.
Why the FCRA angle matters
If an AI-generated score is treated like a consumer report, HR teams and vendors would need to follow FCRA rules. That typically includes notifying applicants, getting consent before generating a report, and giving candidates a chance to dispute inaccuracies. Failure to do so can trigger legal exposure, even if the employer relied on a vendor.
Practically, that means rethinking consent flows, documentation, adverse action procedures, and vendor contracts, especially for automated scoring and ranking tools.
Eightfold's response
Eightfold disputes the allegations. A company spokesperson said its platform uses data intentionally shared by candidates or provided by customers, does not scrape social media, and is committed to responsible AI, transparency, and legal compliance.
What HR leaders should do now
- Map every point where AI influences applicant screening, ranking, or eligibility. Identify which tools produce a score or recommendation used in hiring decisions.
- Ask vendors if their tools could be considered "consumer reports" and whether they support FCRA workflows (consent, disclosures, dispute handling, data corrections, and adverse action notices).
- Implement a clear dispute and correction process for candidates, even if your vendor handles the technical piece.
- Build human-in-the-loop reviews for borderline or high-impact decisions to reduce blind spots and bias risk.
- Audit inputs: confirm data sources, refresh cycles, and how skills are inferred. Remove stale or irrelevant attributes.
- Test for adverse impact and document findings. Re-test after model updates or major hiring changes.
- Update offer letters, career site language, and privacy notices to reflect automated decision-making where applicable.
- Train recruiters and hiring managers on appropriate AI use, escalation paths, and candidate communication.
- Tighten contracts: require compliance with applicable laws, transparency on model behavior, support for audits, and indemnification where appropriate.
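The adverse-impact testing step above often starts with the EEOC's four-fifths (80%) rule of thumb: a group's selection rate below 80% of the highest group's rate is a red flag worth investigating. Here is a minimal sketch of that check; the group names and pass counts are hypothetical, and a real audit would also involve statistical significance testing and legal review.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who passed the screen."""
    return selected / applicants

def four_fifths_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    Ratios below 0.8 fail the EEOC four-fifths rule of thumb and
    warrant a closer look at the screening tool.
    """
    top = max(rates.values())
    return {group: round(rate / top, 2) for group, rate in rates.items()}

# Hypothetical screening outcomes from an AI match-score cutoff
rates = {
    "group_a": selection_rate(60, 100),  # 0.60
    "group_b": selection_rate(30, 100),  # 0.30
}
ratios = four_fifths_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # {'group_a': 1.0, 'group_b': 0.5}
print(flagged)  # ['group_b']
```

Re-running a check like this after every model update, and archiving the results, gives you the documentation trail the audit bullet calls for.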
Questions to ask your vendors
- What data feeds the score? How is it validated for accuracy and job relevance?
- Can candidates see, dispute, and correct the data that informs their score?
- Do you support employer FCRA obligations, including consent and adverse action?
- How do you test for and mitigate bias? How often are models retrained?
- Can we opt out of high-risk inputs (e.g., inferred attributes) without breaking the product?
- What logs, explanations, or audit trails are available for each decision?
Key legal resources
- FTC: "Using Consumer Reports: What Employers Need to Know"
- EEOC: Assessing Adverse Impact of AI in Employment
The bottom line
This lawsuit doesn't ban AI in hiring. It pressures the industry to treat automated scores with the same care as credit checks. For HR, the safest move is to act as if FCRA-style safeguards already apply, because soon they might.
If your team needs practical upskilling on evaluating AI tools and workflows, explore role-based options at Complete AI Training.