Are your HR AI tools "high-risk" under the EU AI Act?
HR teams are using AI for everything from transcribing interviews to drafting job ads and performance reviews. The big question: does using these tools make you a deployer of a "high-risk" AI system under the EU AI Act? Often, no. But the answer depends on what the tool does and how it's used day to day.
What Annex III(4) covers
Under the AI Act, systems used to recruit or select candidates, or to make decisions affecting employment terms, promotion, termination, task allocation, or performance monitoring, can be classified as high-risk. The closer the tool gets to evaluating people or influencing those outcomes, the more likely it is in scope.
Examples likely to be high-risk
- Automated CV/resume screening and filtering
- AI scoring of candidates (including psychometric or behavioral analysis)
- Interview chatbots that rate suitability
- Algorithms that target job ads to specific profiles
- Task allocation based on predicted productivity or outcomes
- Performance dashboards that score employees on reliability or efficiency
- Systems recommending or deciding promotions or terminations
- Gig/platform algorithms that rank and assign work
Examples likely outside Annex III(4)
- Payroll and benefits administration
- Basic time-tracking used only to record attendance
- Leave management tools
- Performance analytics used for org-level planning (not individual decisions)
A quick test to classify your tool
Start here: does the system determine or materially shape a hiring, promotion, termination, pay, or performance decision? If yes, it likely falls under Annex III(4). If it only assists a human and doesn't influence the outcome, it may fall outside the high-risk category, provided that's true in practice, not just on paper. Work through the checks below; the short sketch after the list shows one way to turn them into a quick triage.
- Purpose and functionality: Is the tool purely administrative or assistive (drafting, summarizing, documenting)? Or does it influence outcomes tied to promotion, termination, or pay?
- Automated evaluation or profiling: Does it infer traits like leadership potential, motivation, engagement, or "flight risk"? Are those outputs used to rank, score, or decide?
- Historical or predictive data: Does it use past reviews or workforce analytics to predict performance or behavior? Does it output scores, rankings, or risk flags that feed decisions?
- Human oversight and design control: Is there real review, edit, and sign-off, or is the AI output rubber-stamped? If the organization treats the AI output as authoritative, it can still be in scope.
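To make the test concrete, here's a minimal triage sketch in Python. The questions mirror the checklist above; the field names, the any-"yes" rule, and the examples are illustrative assumptions, not official classification criteria.

```python
from dataclasses import dataclass

@dataclass
class HRToolProfile:
    """Answers to the triage questions above (illustrative fields, not official criteria)."""
    shapes_employment_decision: bool  # determines or materially shapes hiring, promotion, termination, pay, or performance decisions
    scores_or_profiles_people: bool   # infers traits, or ranks/scores candidates or employees
    predicts_performance: bool        # uses historical data to predict performance or behaviour that feeds decisions
    output_rubber_stamped: bool       # AI output treated as authoritative despite nominal human review

def likely_annex_iii_4(tool: HRToolProfile) -> bool:
    """A 'yes' to any triage question suggests the tool may fall under Annex III(4).
    This is a rough screening heuristic, not a legal determination."""
    return any([
        tool.shapes_employment_decision,
        tool.scores_or_profiles_people,
        tool.predicts_performance,
        tool.output_rubber_stamped,
    ])

# A meeting-notes assistant used only for documentation: all answers "no"
notes_assistant = HRToolProfile(False, False, False, False)
print(likely_annex_iii_4(notes_assistant))  # False -> likely assistive, outside high-risk

# A CV screener that ranks applicants: shapes decisions and scores people
cv_screener = HRToolProfile(True, True, False, False)
print(likely_annex_iii_4(cv_screener))  # True -> likely in scope
```

The any-"yes" rule is deliberately conservative: a single evaluative capability is enough to warrant a closer legal look, which matches how Annex III(4) is framed around influence over outcomes rather than tool category.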
Keep assistive tools low-risk
- Set clear guardrails in policies and training: use AI only for documentation and communication, not evaluation or decision-making.
- Avoid feature creep: don't let summarization or drafting features start influencing hiring, promotion, or performance outcomes.
- Document intended use, design choices, and oversight. Keep records to show the tool remains assistive and non-evaluative.
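Record-keeping is easier to sustain when it's systematic. Below is a hypothetical sketch of an oversight log written as JSON lines; the fields and the `log_ai_use` helper are assumptions for illustration, not anything the Act prescribes.

```python
import json
from datetime import datetime, timezone

def log_ai_use(path: str, tool: str, purpose: str, reviewer: str, edited_before_use: bool) -> None:
    """Append one oversight record showing the AI stayed assistive (illustrative fields)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,                      # e.g. "draft job ad", never "score candidate"
        "human_reviewer": reviewer,              # who reviewed and signed off
        "edited_before_use": edited_before_use,  # evidence of real review, not rubber-stamping
        "used_in_employment_decision": False,    # assistive use only, per policy
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("ai_use_log.jsonl", "meeting-notes assistant",
           "summarize interview debrief", "j.doe", edited_before_use=True)
```

A dated, append-only record like this is the kind of evidence that shows a tool remained assistive and non-evaluative in practice, not just on paper.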
What's next
The European Commission plans to issue guidance on high-risk classification by 2 February 2026. Until then, document your intended uses, keep strong human oversight, and be ready to adjust practices as guidance lands.
Read the EU AI Act text (see Annex III(4))
Upskill your HR team
If your HR function is rolling out AI, build skills around oversight, policy, and practical guardrails. Explore curated options here: AI courses by job role.