Keeping police one step ahead of criminals using AI
Published 7 October 2025
The Public Safety Group (PSG), part of the Home Office, asked the Accelerated Capability Environment (ACE) to give government a clear view of AI products, especially generative AI (GenAI), and the ways criminals might misuse them. Tools that aid fraud, the creation of child sexual abuse material, non-consensual intimate images (NCII), and disinformation are already accelerating offending. The bigger risk is AI that provides step-by-step assistance across many crime types. The goal was simple: see the threat in the products before seeing it on the streets.
The brief
PSG wanted a current picture of the GenAI sector and the risks it poses now and over the horizon. The approach focused on assessing products and their misuse potential rather than waiting for crime trends to surface.
What ACE delivered
ACE built a capability map and baseline view of public products and markets across four areas:
- Image and video generators, including "nudification" apps used to create synthetic and deepfake NCII.
- Chatbots based on large language models that can be misused for malicious activities.
- Voice cloning tools, with a focus on fraud and impersonation.
- Data and predictive analytics that can help identify victims at scale and enable personalised social engineering.
ACE delivered a rapid baseline report and deeper analysis in each area to support a fast-moving policy agenda. The work identified key products from major and emerging providers, mapped associated risks and threats, and highlighted safety measures companies are using to prevent misuse. The result was a current, actionable view of AI products linked to criminal activity.
Ongoing horizon scanning
PSG also commissioned a monthly newsletter to track new AI products and their potential for criminal use. Written for policing, crime, and AI practitioners, it now reaches more than 350 people across policing and law enforcement, providing early warning, supporting decisions, and keeping teams aligned on priority risks.
Why this matters for government
- Policy: Evidence to inform regulation, testing regimes, and voluntary commitments.
- Operations: Early signals to adapt tactics against AI-enabled fraud, NCII, and disinformation.
- Procurement: Clear criteria for safety-by-design and guardrails before adoption.
- Communications: Preparedness for deepfakes and coordinated disinformation incidents.
- Skills: Focused upskilling for analysts, investigators, and policy teams.
Practical steps for your team
- Stand up a product watchlist and triage process for new GenAI tools and high-risk features (see the sketch after this list).
- Define unacceptable use cases and required safety controls before pilots or procurement.
- Run red-team tests against misuse scenarios in fraud, impersonation, NCII, and disinformation.
- Establish an information-sharing loop with specialists such as ACE and relevant partners.
- Create fast reporting channels from frontline officers and analysts to policy and comms.
- Invest in targeted AI literacy and tooling skills for law enforcement and policy staff.
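The first step above lends itself to a lightweight, structured record. The Python sketch below is a minimal illustration only: the WatchlistEntry fields, risk categories, and triage weightings are hypothetical placeholders, not an ACE or Home Office schema or methodology.

```python
"""Minimal sketch of a GenAI product watchlist with a simple triage score.

All field names, categories, and weightings are illustrative assumptions.
"""
from dataclasses import dataclass, field
from datetime import date


@dataclass
class WatchlistEntry:
    """One GenAI product or high-risk feature under review."""
    name: str
    category: str                  # e.g. "image/video", "LLM chatbot", "voice cloning", "analytics"
    first_seen: date
    misuse_vectors: list[str] = field(default_factory=list)   # e.g. ["NCII", "fraud", "disinformation"]
    safeguards: list[str] = field(default_factory=list)        # vendor-stated guardrails, if any
    accessibility: int = 1         # 1 = invite-only or paid ... 3 = free and public


def triage_score(entry: WatchlistEntry) -> int:
    """Crude priority heuristic: more misuse vectors and easier access raise the
    score; documented safeguards lower it. Weightings are illustrative only."""
    score = 2 * len(entry.misuse_vectors) + entry.accessibility - len(entry.safeguards)
    return max(score, 0)


if __name__ == "__main__":
    example = WatchlistEntry(
        name="ExampleVoiceClone",          # hypothetical product
        category="voice cloning",
        first_seen=date(2025, 10, 1),
        misuse_vectors=["fraud", "impersonation"],
        safeguards=["consent check on upload"],
        accessibility=3,
    )
    # Route high-scoring entries to deeper analysis; keep low-scoring ones on watch.
    print(example.name, "triage score:", triage_score(example))
```

Keeping the record structured makes it straightforward to rank new products as they appear and to feed the highest-priority ones into red-team testing and the monthly horizon-scanning cycle described above.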
Further reading
- Europol: Large Language Models and Law Enforcement
- C2PA: Content provenance standards for synthetic media
Even as the UK advances an AI safety approach grounded in regulation, testing, and voluntary commitments, AI-enabled offending will keep growing. Sustained scanning of products, realistic misuse testing, and rapid knowledge-sharing across policing and government are now essential to stay ahead.