Government plan to tackle intimate AI deepfakes: what officials need to know
14 January 2026 - The Secretary of State for Science, Innovation and Technology, Liz Kendall, has outlined the UK Government's approach to non-consensual AI deepfakes and AI "nudification" tools. The plan centres on a targeted ban, introduced via amendments to the Crime and Policing Bill, and on using the Online Safety Act to act against AI-enabled abuse.
This comes in response to the Science, Innovation and Technology Committee's request for detail following reports of Grok's intimate deepfakes and the government's December 2025 announcement that it would ban nudification tools. The government previously declined to adopt the Committee's call to explicitly regulate generative AI platforms, but has pledged to address gaps in existing law.
Core measures
- A statutory ban on AI "nudification" tools whose sole purpose is generating fake nude images or videos of real people without consent.
- Reliance on the Online Safety Act's existing duties to act against illegal content and harmful features, with further legislation if gaps persist.
- Continued monitoring of multi-purpose AI tools that could facilitate intimate deepfakes, with scope questions still to be clarified.
For reference: the Online Safety Act sets out platform duties and enforcement powers, and Ofcom's implementation programme provides operational detail for services in scope.
Questions raised by the Committee
Dame Chi Onwurah welcomed the ban but pressed for clarity. She asked why action took months after reports of Grok-related deepfakes surfaced in August 2025, and whether a ban aimed at single-purpose apps will extend to multi-purpose tools like Grok.
She also urged the government to put greater responsibility on platforms such as X and Grok, and to embed the core principles of responsibility and transparency into the online safety regime. As she put it: "No one should live in fear of having their image sexually manipulated by technology."
Timeline and oversight
- Legislative vehicle: amendments to the Crime and Policing Bill currently before Parliament.
- Regulator engagement: Ofcom has been asked for details on its work related to Grok's intimate deepfakes, with a response due 16 January.
- Next milestone: further guidance on scope and enforcement, including treatment of multi-purpose AI models and toolchains.
Implications for departments and public bodies
- Update risk registers: include intimate deepfakes and AI nudification as a defined threat category (staff, officials, and service users).
- Procurement checks: require vendors to disclose model capabilities, training filters, and safeguards that prevent image-based abuse.
- Incident response: set clear pathways with HR, legal, and comms for rapid takedown requests, evidence preservation, and victim support.
- Data handling: tighten policies on staff imagery, event photography, and social media to reduce misuse vectors.
- Platform engagement: prepare standardized notices for illegal content removal aligned to the Online Safety Act duties.
- Evidence for policy: collect anonymized case data to inform regulatory updates and enforcement priorities.
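To make the incident-response and evidence-collection points above concrete, here is a minimal sketch of how a department might structure an incident record so that outstanding response steps (takedown request, evidence preservation, victim support) are tracked explicitly. All names and fields are illustrative assumptions, not a prescribed government schema.

```python
from dataclasses import dataclass, field

@dataclass
class DeepfakeIncident:
    """Hypothetical record for an AI image-abuse incident (field names are illustrative)."""
    reference: str                      # internal case reference
    reported_at: str                    # ISO 8601 timestamp of the report
    platform: str                       # service hosting the content
    content_urls: list = field(default_factory=list)
    takedown_requested: bool = False
    evidence_preserved: bool = False
    victim_support_offered: bool = False

    def outstanding_actions(self) -> list:
        """Return the response steps not yet completed, in checklist order."""
        steps = {
            "takedown request": self.takedown_requested,
            "evidence preservation": self.evidence_preserved,
            "victim support": self.victim_support_offered,
        }
        return [name for name, done in steps.items() if not done]

# Example: a newly reported incident where only the takedown request has been sent
incident = DeepfakeIncident(
    reference="INC-0001",
    reported_at="2026-01-14T09:00:00Z",
    platform="example-platform",
)
incident.takedown_requested = True
print(incident.outstanding_actions())  # → ['evidence preservation', 'victim support']
```

A structured record like this also supports the "evidence for policy" point: completed incidents can be anonymized and aggregated without re-handling the underlying imagery.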
What needs clarifying
- Scope: whether the ban covers multi-purpose services that can generate intimate deepfakes among other outputs.
- Accountability: how duties will apply across model providers, app developers, and hosting platforms.
- Enforcement: notice-and-action timelines, penalties, and expectations for proactive detection and blocking.
Practical next steps
- Map exposure across departments and ALBs; identify high-risk roles and public-facing staff.
- Conduct tabletop exercises for deepfake incidents, including coordination with Ofcom and law enforcement.
- Brief senior leaders on legislative changes and readiness plans before the Crime and Policing Bill stage advances.
- Upskill digital, comms, and safeguarding teams on AI-enabled image abuse and platform escalation routes.
Key takeaways
- The government will move ahead with a focused ban on AI nudification tools and use the Online Safety Act as the enforcement backbone.
- The Committee is pressing for broader platform accountability, faster action, and clear coverage of multi-purpose tools like Grok.
- Departments should act now on risk, procurement, and incident response while legislative details are finalized.