Committee presses Government and Ofcom on action against AI intimate deepfakes
9 January 2026 - The Science, Innovation and Technology Committee has pressed Ofcom and the Department for Science, Innovation and Technology (DSIT) for urgent clarity on action to address AI-generated intimate deepfakes. This follows reports that xAI's Grok has produced sexualised images of women and children on X without consent. Ofcom has confirmed "urgent contact" with xAI.
Why this matters
There are clear gaps between what the law intends and what is currently enforceable. The Data Act provision criminalising the creation of non-consensual intimate images using AI is not yet in force, despite Royal Assent in July 2025. The Government also announced plans to ban "nudification" tools in December 2025, but no firm timetable has been set.
Questions put to Ofcom
- Why has Ofcom not opened a formal investigation or taken enforcement action in light of the reports?
- Does Ofcom currently have sufficient powers to address intimate deepfakes generated by AI tools on regulated services?
- How does Ofcom interpret current and upcoming legislation in this space, including interactions with the Online Safety Act and the Data Act?
- What was discussed in Ofcom's "urgent contact" with xAI, and what outcomes are expected?
Questions put to DSIT
- Will the Government amend the Online Safety Act to explicitly cover generative AI, as recommended by the Committee in 2025 and previously rejected?
- When will the promised ban on "nudification" tools be introduced, and how will it be enforced in practice?
- What is the commencement timetable for the Data Act provision criminalising non-consensual intimate AI images?
The Committee has requested replies from both Ofcom and DSIT by 16 January.
Chair comment
"Reports that xAI's Grok has been used to create non-consensual sexualised deepfakes on X are extremely alarming. My committee warned last year that the Online Safety Act was riddled with gaps - including its failure to explicitly regulate generative AI. Recent reports about these deepfakes show, in stark terms, how UK citizens have been left exposed to online harms while social media companies operate with apparent impunity.
"I've written to both the government and Ofcom seeking urgent clarity on how they will tackle the rapid rise of these AI-generated intimate deepfakes. We need transparency on Ofcom's conversations with xAI and a clear explanation of whether it has the powers to take effective enforcement action. The Government must also set out when it will finally introduce the promised ban on nudification tools and take the steps needed to protect women and children online."
Implications for departments and regulators
- Confirm enforcement pathways: map which harms fall under the Online Safety Act and where additional powers or directions are needed for generative AI outputs.
- Set commencement dates: provide a clear timeline for bringing the relevant Data Act provision into force.
- Operationalise the nudification ban: define prohibited functionality, platform responsibilities, reporting thresholds, and penalties.
- Coordinate with law enforcement and victim support services to ensure rapid takedown, evidence preservation, and redress for victims.
- Publish engagement logs with major AI providers and platforms to demonstrate compliance expectations and escalation routes.
What officials can do now
- Prepare draft guidance for platforms on handling intimate deepfakes, including prompt detection, user reporting friction, and response SLAs.
- Stress-test Ofcom's current powers against realistic AI-generated harm scenarios; identify gaps that require secondary legislation or swift parliamentary time.
- Develop a comms plan for victims and the public outlining rights, reporting channels, and expected response times.