ICO’s New AI Strategy Targets Recruitment Tech and Biometric Use
The Information Commissioner’s Office (ICO) is increasing oversight of AI and biometric technologies, with a focus on how automated decision-making (ADM) is used in recruitment. This new approach aims to ensure personal data is handled responsibly, helping organisations innovate while maintaining public trust.
At an event with the all-party parliamentary group for AI (AI APPG), Information Commissioner John Edwards highlighted the importance of safeguarding personal information as AI technologies grow. He emphasised that trust depends on clear protections, especially as AI systems become more autonomous.
Why This Matters to HR Professionals
Research shows people want transparency about when and how AI affects their job applications. Concerns arise when automated systems make flawed decisions or when biometric tools such as facial recognition produce inaccurate results. Over half of those surveyed worry about privacy infringements if facial recognition is used by police.
For HR teams, this means increased scrutiny on AI tools used for recruitment. Organisations will need to demonstrate lawful, fair, and proportionate use of these technologies to meet regulatory expectations and maintain candidate trust.
Key Actions in ICO’s AI and Biometric Strategy
- Reviewing the use of ADM in recruitment and collaborating with early adopters to set best practices
- Auditing and providing guidance on the fair use of facial recognition technology (FRT)
- Setting clear rules for using personal data to train generative AI models
- Developing a statutory code of practice for responsible AI deployment
- Monitoring emerging risks, including the rise of agentic AI—AI systems that act autonomously
This strategy encourages organisations to innovate responsibly while protecting candidates' and employees' privacy.
Understanding Agentic AI
Agentic AI refers to AI systems that act on their own, making decisions based on user goals without constant human input. Unlike generative AI, which creates content, agentic AI performs tasks—like placing an order online using stored ID and payment details.
As these systems grow more capable, the ICO plans to closely examine their impact on data protection and individual rights, ensuring safeguards keep up with technological advances.
Voices from Parliament and Industry
Lord Clement-Jones, co-chair of the AI APPG, stressed that trust is the foundation of AI progress. He pointed out that privacy, transparency, and accountability are essential for innovation that respects individual rights.
Dawn Butler, Labour MP and vice chair of the AI APPG, highlighted that AI affects society broadly—including healthcare, education, and democracy—and must be fair and inclusive.
What HR Should Do Now
Human Resources professionals should prepare for increased regulation around AI tools in recruitment. It’s crucial to:
- Ensure transparent communication with candidates about AI’s role in hiring decisions
- Conduct regular audits of AI systems to check for bias and accuracy
- Stay updated on ICO guidance and codes of practice to maintain compliance
- Consider training options on AI and data protection to better manage these technologies
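As a starting point for the auditing step above, one widely used heuristic is the "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool may warrant closer review. The sketch below is a minimal, hypothetical illustration of that check; the function name, input format, and example figures are assumptions, not part of the ICO's strategy, and a real audit would need legal and statistical input.

```python
def adverse_impact_ratio(outcomes):
    """Compare selection rates across candidate groups.

    outcomes: dict mapping group name -> (number_selected, number_applied).
    Returns a dict mapping each group to the ratio of its selection rate
    against the highest-selecting group's rate.
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}


# Illustrative figures only: 40 of 100 applicants selected in one group,
# 24 of 100 in another.
results = adverse_impact_ratio({
    "group_a": (40, 100),
    "group_b": (24, 100),
})

# Flag any group below the common four-fifths (0.8) threshold for review.
flagged = [group for group, ratio in results.items() if ratio < 0.8]
```

Here `group_b`'s ratio is 0.6, so it would be flagged for further investigation. A flag is not proof of unlawful bias, but it is the kind of evidence trail regulators expect audits to produce.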
For HR teams looking to deepen their understanding of AI tools and responsible use, exploring courses tailored for various skill levels can be valuable. Resources such as Complete AI Training’s HR-focused AI courses offer practical insights into integrating AI thoughtfully in recruitment processes.
Looking Ahead
The ICO’s strategy marks a clear signal: AI in recruitment and biometric tech will face tighter scrutiny. Organisations that act now to align with these expectations will build stronger candidate trust and avoid regulatory challenges.
Staying informed and proactive is key. As AI tools evolve, so will the standards for their ethical use—making responsible AI adoption a priority for every HR professional.