Human resources and AI reach an ethical crossroads
Should an AI screen resumes before a recruiter looks at them? Is it fair to monitor productivity with desktop trackers? And what about measuring engagement and sentiment without employees knowing?
These are the questions studied by Salvatore Falletta, an HR executive-turned-academic and professor of the practice in the Department of Human and Organizational Development and Leadership, Policy and Organizations at Vanderbilt University. His work sits at the intersection of education, psychology and business, with a clear focus: use AI and people analytics in ways that help the workforce without crossing ethical or legal lines.
People analytics: value with a human center
People analytics uses data to understand the workforce and inform decisions. Done well, it can improve hiring, performance, retention and employee experience. But there's a catch: tools must be deployed with intent, transparency and a human in the loop.
Falletta's stance is direct: AI should never make a workforce decision by itself. Keep humans at the center, and know the data and assumptions behind every model you use.
- Use AI to listen to employee feedback in real time, then act on it.
- Let AI augment recruiters (e.g., resume triage), not replace judgment.
- Understand training data and model bias before deployment.
- Be transparent with employees about what you're measuring and why.
Avoiding "creepy analytics"
Some practices cross a line: facial-expression analysis in video interviews, screenshots and keystroke surveillance, or undisclosed tracking that erodes trust. Falletta argues people analytics can still be a force for good, provided HR sets boundaries and stays transparent.
His book, "Creepy Analytics: Avoid Crossing the Line and Establish Ethical HR Analytics for Smarter Workforce Decisions," makes the case for guardrails that protect people while improving decisions.
Practical guardrails HR teams can implement now
- Human oversight: No fully automated employment decisions. Require review for high-stakes outcomes.
- Transparency and consent: Tell people what you collect, how it's used and the benefit to them.
- Data minimization: Collect only what's needed for a clear business purpose.
- Validation and fairness: Validate tools for the job they're used for and monitor adverse impact over time; see the EEOC's guidance on AI in employment selection, and the simple check sketched after this list.
- Explainability: Ensure you can explain key factors behind algorithmic recommendations.
- Vendor due diligence: Demand model documentation, bias testing results and update cadence.
- Governance: Stand up an HR-legal-IT review board for any new AI tool.
- Security and retention: Protect data, set retention limits and plan for de-identification where feasible.
- Pilot first: Start small, measure outcomes, and gather employee feedback before scaling.
- Risk framework: Use structured approaches such as NIST's AI Risk Management Framework.
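To make the validation-and-fairness guardrail concrete, here is a minimal sketch of an adverse-impact check using the EEOC's four-fifths rule of thumb, which flags a group whose selection rate falls below 80% of the highest group's rate. The column names and sample data are hypothetical placeholders; in practice you'd feed in an export from your applicant tracking system.

```python
# Minimal adverse-impact check (four-fifths rule). Data is hypothetical.
import pandas as pd

# One row per candidate: demographic group and whether they advanced.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "advanced": [1,   1,   0,   1,   1,   0,   0,   1,   0,   0],
})

# Selection rate per group: share of candidates who advanced.
rates = df.groupby("group")["advanced"].mean()

# Impact ratio: each group's rate relative to the highest-rate group.
impact_ratio = rates / rates.max()

# The four-fifths rule of thumb flags ratios below 0.8 as potential
# adverse impact that warrants closer review.
flagged = impact_ratio[impact_ratio < 0.8]
print(rates, impact_ratio, sep="\n")
if not flagged.empty:
    print(f"Potential adverse impact for group(s): {', '.join(flagged.index)}")
```

This is a screening check, not a verdict: run it per hiring stage on real outcomes and pair the ratio with appropriate statistical tests before drawing conclusions.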
What HR can ship this quarter
- Publish an AI in HR policy and add disclosures to candidate and employee communications.
- Run a bias audit on your screening and assessment tools; fix issues before the next hiring cycle.
- Eliminate intrusive monitoring that can't be justified by risk or performance outcomes.
- Train recruiters, HRBPs and people managers on ethical AI use and escalation paths.
- Set KPIs for AI-enabled processes (quality of hire, time-to-fill, turnover, eNPS) and review monthly; a minimal eNPS calculation is sketched below.
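As one example of a KPI that's simple to compute consistently, here is a minimal eNPS sketch. The survey scores are hypothetical sample data; the standard 0-10 scale treats 9-10 as promoters and 0-6 as detractors.

```python
# Minimal eNPS calculation from 0-10 survey scores. Scores are hypothetical.
scores = [9, 10, 8, 7, 6, 9, 3, 10, 8, 5]

promoters  = sum(s >= 9 for s in scores)  # scores of 9 or 10
detractors = sum(s <= 6 for s in scores)  # scores of 0 through 6

# eNPS = % promoters minus % detractors, reported as a whole number (-100 to +100).
enps = round(100 * (promoters - detractors) / len(scores))
print(f"eNPS: {enps}")
```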
Where the research is heading
Falletta's next focus compares what drives engagement for leaders versus employees as a whole. His view: those drivers differ, and those differences change how we design programs that actually move the needle.
Vanderbilt's HOD program is built for this kind of work, bringing together psychology, education, adult learning, technology, leadership and business. That mix creates space for practical research HR can use.
Helpful next step
If your HR team needs to upskill on AI tools and workflows, explore curated programs by role at Complete AI Training.