UK ICO Sets Strategy for AI and Biometrics Regulation
The UK Information Commissioner’s Office (ICO) has published a new strategy outlining its approach to regulating artificial intelligence (AI) and biometric technologies. The plan focuses particularly on automated decision-making systems and the use of facial recognition by police forces.
Released on 5 June 2025, the strategy aims to support innovation while ensuring the protection of individuals’ data rights. The ICO will concentrate its efforts on areas where risks are highest but where there is also significant potential for public benefit. Key focus areas include recruitment and public service automated decision-making systems, police use of facial recognition, and the development of AI foundation models.
Actions Planned by the ICO
- Conduct audits and issue guidance on the lawful, fair, and proportionate use of facial recognition technology by police.
- Set clear expectations regarding the use of personal data for training generative AI models.
- Develop a statutory code of practice for organisations deploying AI technologies.
- Consult on updated guidance on automated decision-making and profiling, working closely with early adopters such as the Department for Work and Pensions (DWP).
- Produce a horizon scanning report on agentic AI capable of autonomous actions.
Information Commissioner John Edwards emphasised that public trust depends not on the new technologies themselves, but on whether they are applied responsibly and within the necessary regulatory safeguards.
Key Concerns and Focus Areas
The ICO strategy highlights transparency, explainability, bias, discrimination, and rights and redress as major public concerns. For AI models, the regulator will seek assurances from developers about how personal data is used, ensuring people remain informed. Regarding police facial recognition, the ICO plans to publish clear guidance on lawful deployment and conduct audits with published results to maintain public confidence.
Dawn Butler, vice-chair of the AI All Party Parliamentary Group (APPG), stated that AI will change many aspects of society, including healthcare, education, travel, and democracy. She stressed that fairness, openness, and inclusion must be fundamental to AI development.
Lord Clement-Jones, co-chair of the AI APPG, added that privacy, transparency, and accountability form the foundation of trust essential for AI’s advancement. He noted that as AI evolves from generative models to autonomous systems, the risks escalate, making the safeguarding of public trust and individual rights critical.
Public Trust and Adoption Challenges
Negative perceptions around AI and biometric use risk limiting their adoption. Trust is vital for public support and engagement with these technologies, particularly concerning police biometrics, recruitment algorithms, and AI determining welfare eligibility.
In 2024, only 8% of UK organisations reported using AI decision-making tools with personal data, and 7% used facial or biometric recognition, both showing only slight increases from the previous year. The ICO’s goal is to empower organisations to use these technologies lawfully, boosting public trust. However, the regulator will not hesitate to act against organisations that misuse personal data or evade responsibilities.
Calls for Clearer Legal Frameworks
Recent analyses, including one by the Ada Lovelace Institute in May 2025, highlight significant gaps and fragmentation in governance of biometric surveillance technologies. The report calls for clearer legal frameworks and effective oversight across all biometric applications, such as fingerprint payments in schools, emotion recognition systems, and supermarket facial recognition for shoplifting prevention.
Parliament and civil society have repeatedly urged new laws to govern policing biometrics. Multiple inquiries and reviews—from the Lords Justice and Home Affairs Committee, former biometrics commissioners, the Equality and Human Rights Commission, and the House of Commons Science and Technology Committee—have addressed these concerns. The independent legal review led by Matthew Ryder QC also examined private sector uses, including workplace monitoring and public-private partnerships.
The ICO’s strategy marks a step toward balancing innovation with protection of data rights, aiming to build public trust and ensure that AI and biometric technologies are used responsibly.
For those interested in gaining a deeper understanding of AI and its ethical use, exploring practical training resources can be helpful. Relevant courses are available at Complete AI Training.