UK dragged its feet as Grok AI deepfakes spread, victims say

Jess Davies says slow UK enforcement let Grok-fuelled deepfakes spread, while X paywalled tools. New laws kick in this week as Ofcom probes and victims demand safeguards.

Published on: Jan 15, 2026

Quicker action could have stopped Grok AI deepfakes, victim says

Explicit AI-generated images of Welsh presenter Jess Davies were created and spread on X using Grok AI without her consent. She argues the UK government's delayed enforcement let harm compound, and accuses X owner Elon Musk of monetising image abuse by limiting Grok's image tools to paying users on the website, even as free access reportedly remained available via the app.

Ministers now say creating non-consensual intimate deepfakes will be illegal this week, closing a gap where sharing such images was a crime but generating them wasn't, despite the legislation having been ready since June 2025. Ofcom has launched an investigation into whether Grok breached online safety laws. Musk frames this as a speech issue; victims call it abuse at scale.

What happened

Davies spoke out on X after seeing a surge of non-consensual, intimate images. Soon after, AI-generated images of her appeared, made by strangers without her consent. "You become desensitised to seeing this graphic content," she said. "But you still can't prepare yourself to have your consent removed from you like this."

She says she was still able to create images of herself in lingerie and in a micro thong through the Grok app, and argues the system should block such outputs. "He bought a social media platform and he is responsible for the safety of users."

What is Grok AI?

Grok is X's AI assistant. It can reply to posts and edit images through an AI image feature. Reports indicate users created and publicly shared explicit images of real people through Grok, without their consent.

X said in a statement that anyone using or prompting Grok to make illegal content faces the same consequences as if they uploaded illegal content directly. The company has limited image features to paying users on the site, though reports suggest free creation remained possible through the app.

Why this matters for government

The delay between law being "ready" and law being "in force" created a six-month void. Victims were exposed, platforms monetised access, and enforcement lagged while harm scaled to huge audiences.

This isn't abstract. It's targeted misuse, disproportionately aimed at women and girls, that erodes dignity, safety, and trust online. It also tests the state's ability to convert statute into timely, operational protection.

What victims and experts are saying

Philosophy lecturer Dr Daisy Dixon says non-consensual images of her escalated after she spoke publicly about Grok and the new law. "Being dressed in certain ways and being put in certain sexualised positions against our will… it feels like someone's hijacked your sense of self." She links the backlash to "stoking the fires of extreme misogyny."

Davies welcomes the law coming into force but says victims "have been waiting" and that faster action would have reduced harm. "Think of how many victims are out there already."

Government signals

The prime minister warned X it could lose the "right to self regulate" if it cannot control Grok. The science, innovation and technology secretary called non-consensual intimate deepfakes an "affront to decent society" and said they are illegal.

Policy gaps exposed

  • Commencement lag: Legislation was ready but inactive, leaving a window for harm.
  • Platform gatekeeping: Paywalling an unsafe feature is not safety; controls must block generation and sharing at source.
  • Anonymity and virality: Offenders can act at scale with low friction; victims face slow, fragmented reporting.
  • Guardrails and auditing: Insufficient pre-release red-teaming, weak content filters, and unclear kill-switches.

Immediate actions for departments and regulators

  • Rapid commencement and charging: Issue operational guidance to police and CPS on the new offence, with clear evidential standards and charging thresholds.
  • Ofcom directions: Publish interim expectations for AI image tools (prohibited prompts, default off for sexual image edits, traceable audit logs, and human review on edge cases).
  • Safety-by-default: Require platforms to prevent creation and re-upload of flagged content using hashing and model-level blocks, not just paywalls.
  • Enforcement triggers: Define measurable thresholds (volume of illegal outputs, takedown speed, re-upload rates) that prompt sanctions or loss of self-regulatory status.
  • Single reporting portal: Stand up a national intake with victim-first design, evidence preservation, and guaranteed SLAs for takedowns.
  • Data access: Establish MoUs with X for rapid disclosure on suspected offences, with privacy safeguards and clear legal bases.
  • Procurement levers: Any public-sector use of generative models must include safety evals, documented red-team results, and live abuse-prevention metrics.
  • Watermarking and provenance: Mandate content provenance for public-sector images; promote interoperable standards with industry.
  • Victim support: Fund specialist services and legal aid for image abuse cases; ensure trauma-informed police training.
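The "safety-by-default" point above calls for hashing to prevent re-upload of flagged content. A minimal sketch of that idea follows; the class and method names are illustrative, not a real platform API. Note that a cryptographic hash only catches byte-identical copies, so production systems pair it with perceptual hashing (such as PDQ or PhotoDNA) to catch re-encodes and crops.

```python
import hashlib

class ReuploadBlocklist:
    """Illustrative sketch: block byte-identical re-uploads of flagged images.

    Real deployments add perceptual hashing so near-duplicates
    (resized, re-encoded, cropped copies) are also caught.
    """

    def __init__(self) -> None:
        self._flagged: set[str] = set()

    @staticmethod
    def fingerprint(image_bytes: bytes) -> str:
        # SHA-256 of the raw bytes; stable, cheap, exact-match only.
        return hashlib.sha256(image_bytes).hexdigest()

    def flag(self, image_bytes: bytes) -> None:
        """Record a takedown so identical copies are rejected at upload time."""
        self._flagged.add(self.fingerprint(image_bytes))

    def is_blocked(self, image_bytes: bytes) -> bool:
        return self.fingerprint(image_bytes) in self._flagged


blocklist = ReuploadBlocklist()
blocklist.flag(b"flagged-image-bytes")
print(blocklist.is_blocked(b"flagged-image-bytes"))  # True
print(blocklist.is_blocked(b"some-other-image"))     # False
```

The design point is that the check runs at upload time, before content is published, rather than after a victim files a report.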

What X should be required to demonstrate

  • Model-level blocks for sexualised edits of real people and minors, including prompt and output filters.
  • Hashing, content matching, and proactive detection to prevent re-uploads across site and app.
  • Transparent reporting: volumes, takedown latency, false negatives, appeals, and independent audits.
  • A functioning kill-switch for unsafe features when abuse spikes.

Bottom line

Laws on paper don't protect anyone until they bite. The test now is execution: fast guidance, firm enforcement, and platform changes that actually prevent creation and spread, before victims have to beg for takedowns.
