AI hiring is now a legal risk. Are you up to speed?
AI can help HR teams move faster. It can also put your company on the wrong side of employment and consumer law if deployed without guardrails.
A proposed class action filed in California in January against Eightfold AI signals a new phase of legal scrutiny for algorithmic hiring. The allegations center on undisclosed algorithmic scoring, opaque data use, and a lack of applicant access or recourse. Outside the U.S., similar systems risk conflict with the EU Artificial Intelligence Act.
The case at a glance
In Kistler & Bhaumik v. Eightfold AI, the complaint alleges the company generated hidden candidate scores used by major employers, including Microsoft, PayPal, Salesforce and Bayer, to screen applicants. Two women say they were denied STEM roles without any visibility into the AI's evaluation or a way to challenge it. Whether the case succeeds or not, the takeaway for employers is clear: AI in hiring is a compliance, fairness, and trust issue.
What the complaint alleges
- Applicants were scored from 0-5 on "likelihood of success," and those scores informed screening decisions.
- Systems allegedly pulled sensitive data: social profiles, location data, device activity, cookies, and other tracking signals.
- Applicants received no disclosure, consent, explanation, or chance to correct errors, and no copy of the report that affected their prospects.
The filing emphasizes that existing laws contain no AI exemption. If an algorithmic ranking is deemed a "consumer report" under the Fair Credit Reporting Act (FCRA), opaque AI hiring tools could face the same obligations as background-check reports: disclosure, written authorization, and adverse-action notices.
Three key risks legal teams should flag now
- Compliance risk: Undisclosed or unreviewable AI increases exposure under consumer-protection and fair-assessment rules.
- Data risk: Tools that ingest social, location, device, or browsing data trigger high-stakes privacy duties and cross-border concerns.
- Bias and fairness risk: Black-box scoring can screen out qualified candidates, inviting claims and damaging employer brand.
Five actions to reduce AI hiring risk
1) Build transparency into the process. Tell applicants when AI is used, what it evaluates, and what data it relies on. Add plain-language notices to job posts and application flows. Keep documentation ready for regulators and counsel.
2) Keep a human in the loop. Use AI as input, not final say. Require human review before rejection or advancement, and log the rationale (a minimal logging sketch follows this list). Train reviewers on how the tool works and its limits.
3) Guarantee applicant rights: access, explanation, correction. Provide candidate-facing reports or summaries, explain how scores affected decisions, and enable corrections. Route disputes to a human reviewer and record outcomes.
4) Minimize data to what's job-relevant. Disable social scraping, location tracking, device data, and cookies unless strictly necessary and defensible. Validate inputs against the job's essential functions and document that validation.
5) Choose tools built for employment and consumer law. Require audit trails, explainable scoring, bias testing, and compliance artifacts (e.g., data maps, retention schedules, impact assessments). If the vendor can't explain a decision to a candidate and to your leadership, your legal team won't be able to defend it.
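To make action 2 concrete, here is a minimal sketch of what "log the rationale" could look like in practice. It is illustrative only: the ReviewRecord fields, the record_review helper, and the require-a-rationale check are assumptions for a simple in-house log, not any vendor's API, and you would adapt them to your own applicant tracking system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    """One human review of an AI-assisted screening decision (illustrative schema)."""
    application_id: str
    ai_score: float          # the tool's raw score, retained for audit
    reviewer_id: str         # a named human, not a system account
    decision: str            # e.g., "advance" or "reject"
    rationale: str           # the reviewer's stated, job-related reason
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def record_review(log: list[ReviewRecord], record: ReviewRecord) -> None:
    # AI output is an input, not the final say: no decision is logged
    # without a named human reviewer and a written rationale.
    if not record.reviewer_id or not record.rationale.strip():
        raise ValueError("A human reviewer and a written rationale are required.")
    log.append(record)

audit_log: list[ReviewRecord] = []
record_review(audit_log, ReviewRecord(
    application_id="APP-1042",        # hypothetical example data
    ai_score=3.2,
    reviewer_id="recruiter-17",
    decision="advance",
    rationale="Meets the posting's required Python and SQL experience.",
))
```

The design point is that the AI score is stored as one input alongside a named reviewer and a job-related reason, so counsel can later reconstruct why any candidate was advanced or rejected.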
Practical next steps for in-house counsel
- Inventory every AI touchpoint in hiring (sourcing, screening, interviewing, assessments) and classify decisions that could be deemed "adverse."
- Update notices, consent flows, and candidate communication templates to reflect AI use and rights to access and correction.
- Run bias and validation testing with repeatable methods; retain results and remediation logs (see the four-fifths sketch after this list).
- Rework vendor contracts: require disclosure of data sources, explainability, testing cadence, audit rights, and incident reporting.
- Stand up an appeal process for AI-influenced decisions with tracked SLAs and human review.
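On the bias-testing point above, one repeatable starting method is the EEOC's four-fifths (80%) guideline for adverse impact: compare each group's selection rate to the highest group's rate and flag ratios below 0.8. The sketch below is a minimal illustration with made-up numbers and hypothetical function names; the ratio is a screening heuristic to document and investigate, not a legal conclusion.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the screen."""
    return selected / applicants

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    # Compare each group's selection rate to the highest-rate group.
    # Under the four-fifths guideline, a ratio below 0.8 is a common
    # flag for potential adverse impact.
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical screening outcomes for one hiring cycle:
rates = {
    "group_a": selection_rate(48, 120),  # 40% advance
    "group_b": selection_rate(27, 100),  # 27% advance
}
ratios = adverse_impact_ratios(rates)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # {'group_a': 1.0, 'group_b': 0.675}
print(flagged)  # group_b falls below the four-fifths threshold
```

Running the same test each cycle and retaining its inputs and outputs is what makes the method "repeatable" in the sense regulators and opposing counsel will ask about.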
A better path forward
Faster hiring is possible without hidden scoring, shadow data collection, or unreviewable decisions. The future favors AI systems that are transparent by design, human-guided, and aligned with employment and consumer law from day one.
The Kistler & Bhaumik v. Eightfold AI filing is likely a preview of wider scrutiny. Treat AI hiring like credit reporting or background checks: document it, test it, explain it, and give candidates a way to challenge it. Teams that do this now will lower risk and earn more trust from applicants.
For ongoing guidance built for legal professionals, see AI for Legal.