UK deepfake law delay collides with Grok AI backlash
Campaigners say the government is dragging its feet on a law that would criminalise creating or requesting non-consensual sexualised deepfakes. The row escalated after users of Elon Musk's Grok AI used prompts to "undress" women in images and posted the results back at the victims.
Right now, it is an offence in the UK to share intimate images, including deepfakes, without consent. A new offence criminalising the creation or commissioning of "purported intimate images" passed in June 2025, but it has not been brought into force, leaving a gap that is being exploited in plain sight.
What's happening on X and Grok
Grok can be accessed via its own site or app, or by tagging "@grok" in posts on X. Users have been asking it to undress women or place them in sexualised poses; some victims say they have faced waves of hundreds of images and have stopped reporting them because of the mental strain.
X said that anyone using or prompting Grok to make illegal content will face the same consequences as if they had uploaded it themselves. The Prime Minister called the images "disgraceful" and "disgusting," adding that "X has got to get a grip on this," while Ofcom said it had made urgent contact with X and xAI and is investigating.
Where the law stands
The Ministry of Justice says it is already an offence to share intimate images online without consent, including deepfakes. The Data (Use and Access) Act 2025 created an offence of creating or commissioning purported intimate images, but it has not been commenced, and no date has been set.
Experts say the uncommenced offence would likely cover at least some of the images generated via Grok. Advocates argue the delay puts women and girls in harm's way and chills participation online. Several peers have criticised the government for "repeatedly" delaying the rules and urged immediate action.
The enforcement gap
Without commencement, enforcement leans on existing offences: sharing intimate images without consent, offences covering images of children, and other communications or harassment laws. Grey areas remain around "unclothing" edits that are sexualised but may not be categorised as pornographic, and around soliciting creation while the creation offence is not in force.
Platforms can act under their own policies, but policy without tooling and moderation at scale is weak. Victims face high reporting friction while content can propagate faster than takedowns.
What government and legal leaders can do now
- Commence the creation/commissioning offence without delay, and publish commencement guidance with clear examples, thresholds, and evidential notes.
- Issue interim CPS/NPCC guidance on charging under existing offences for "unclothing" and sexualised edits to reduce hesitancy at the front line.
- Direct Ofcom to publish a rapid compliance plan for X and similar services, including timelines, reporting SLAs, and penalties for non-compliance.
- Mandate data preservation for reports of intimate image abuse so investigators can obtain prompt logs and model outputs before deletion.
- Require default user protections: opt-out by default from AI edits on profile photos; block "undress" prompts; warn, rate-limit, and suspend repeat offenders.
- Fund victim support and legal aid pathways for swift image removal, evidence capture, and mental-health support.
- Stand up a cross-government task group (MoJ, DSIT, Home Office, Ofcom) to coordinate comms, metrics, and weekly progress until commencement.
- Publish a model protocol for public bodies on evidence collection from AI systems (prompt logs, model IDs, timestamps, hashes) to support prosecutions; a minimal sketch of such a record follows this list.
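As a rough illustration of what such a protocol could require, here is a minimal sketch of a tamper-evident evidence log in Python. The field names, the SHA-256 hash chain, and the record structure are illustrative assumptions, not a prescribed format; a real protocol would also specify signing, custody, and retention.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_event(log: list[dict], prompt: str, model_id: str, output_ref: str) -> dict:
    """Build a tamper-evident log entry: each record hashes the previous one,
    so any later edit or deletion breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model_id": model_id,
        "output_ref": output_ref,  # e.g. a content hash of the generated image
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash in order; returns False if any record was altered."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log: list[dict] = []
log.append(record_event(log, "example prompt text", "model-v1", "sha256:abc123"))
assert verify_chain(log)
```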
Platform actions that should be table stakes
- Blocklist sexualised edit prompts and "undress" intents, and detect circumvention attempts with safety classifiers; a minimal sketch of such a gate follows this list.
- Disable edits of others' images by default; require explicit, logged consent from the depicted person for any transformation.
- Watermark all AI-edited images, attach tamper-evident provenance metadata, and expose that provenance to users and moderators; a sketch of a provenance record also follows this list.
- Route any intimate-image prompts to human-in-the-loop review; apply escalating penalties and law-enforcement referrals.
- Provide one-click reporting for intimate image abuse with 24-hour takedown targets and transparent case tracking.
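To make the blocklist-and-escalation item concrete, here is a minimal sketch in Python. The patterns, strike thresholds, and action names are all illustrative assumptions; a production system would pair patterns like these with a trained safety classifier to catch paraphrases and obfuscations.

```python
import re
from collections import defaultdict

# Illustrative patterns only; real coverage needs a trained classifier.
BLOCKED_INTENTS = [
    r"\bundress\b",
    r"\bremove\s+(her|his|their)\s+cloth\w*",
    r"\bnudif\w*",
]

strikes: defaultdict[str, int] = defaultdict(int)

def gate_prompt(user_id: str, prompt: str) -> str:
    """Return the action for a prompt: allow, warn, rate_limit, or suspend."""
    if not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_INTENTS):
        return "allow"
    strikes[user_id] += 1
    if strikes[user_id] == 1:
        return "warn"        # first offence: block the request and warn
    if strikes[user_id] <= 3:
        return "rate_limit"  # repeat offence: throttle the account
    return "suspend"         # persistent abuse: suspend and refer on

assert gate_prompt("u1", "make the sky bluer") == "allow"
assert gate_prompt("u1", "undress her in this photo") == "warn"
```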
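For the watermarking and provenance item, here is a sketch of the metadata a platform could attach to every AI-edited image. The field names are assumptions; standards such as C2PA define richer formats, and a real deployment would sign the record so tampering is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance(image_bytes: bytes, model_id: str, edit_prompt: str) -> str:
    """Build a JSON provenance record to embed in, or ship alongside, an
    AI-edited image. Signing with a platform-held key is omitted for brevity."""
    record = {
        "ai_edited": True,
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": model_id,
        "edit_prompt": edit_prompt,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

print(build_provenance(b"\x89PNG", "model-v1", "example edit prompt"))
```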
Key references
For regulatory context, see Ofcom's online safety duties and enforcement approach. For charging guidance on intimate image abuse, see the Crown Prosecution Service guidance on disclosing private sexual images.
For legal teams upskilling on AI risk
If your department needs structured training on AI systems, safety controls, and policy enforcement, explore practical courses at Complete AI Training (courses by job).
Bottom line
The offence banning creation and commissioning of sexualised deepfakes exists on paper. Until it's switched on and backed by clear guidance and platform obligations, victims carry the cost.
Move fast on commencement, tighten enforcement, and force product changes that make abuse hard by default. That combination will do more than statements ever could.