Government demands X stop Grok deepfakes of women and children as Ofcom launches urgent probe

UK demands action from X after claims Grok made sexualised deepfakes of women and minors. Ofcom is moving fast under the Online Safety Act; fines or feature limits could follow.

Published on: Jan 09, 2026

Government demands swift action from X over 'appalling' Grok AI deepfakes

The government has called on X to act immediately after reports that its AI assistant, Grok, generated sexualised images of women and children without consent. Technology Secretary Liz Kendall called the situation "absolutely appalling," stressing that platforms have a legal duty to prevent and remove this content.

The intervention follows Ofcom's urgent contact with X and xAI. The UK now joins France, Malaysia and India in pressing for action on AI-enabled abuse.

Regulator steps in

Ofcom said it will "undertake a swift assessment" to determine whether there are compliance issues that warrant investigation. Kendall backed Ofcom's approach and any enforcement action it deems necessary.

She was clear: this is about enforcing the law, not restricting speech. "Services and operators have a clear obligation to act appropriately."

What changed on X

Grok can be tagged in posts to respond to prompts, with some features reserved for premium subscribers. Users have reportedly asked it to alter real photos and place women in sexualised scenarios; the BBC says it has seen multiple examples. The Internet Watch Foundation has identified "criminal imagery" of underage girls apparently created via Grok.

Concerns intensified after X introduced an "Edit Image" button that lets users alter images via text prompts, even if they didn't upload the original and without the subject's consent. Several women described the experience as dehumanising and frightening.

Legal position: clear and enforceable

The Online Safety Act makes it illegal to create or share intimate or sexually explicit images without consent, including AI-generated material. Platforms must limit exposure to such content and remove it quickly once identified. Intimate image abuse and cyberflashing are priority offences under the law.

Statements from key players

Liz Kendall: "We cannot and will not allow the proliferation of these images... It is absolutely right that Ofcom is looking into this as a matter of urgency."

Ofcom: "Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation."

X: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

Sir Ed Davey urged swift government action and suggested access to X could be restricted if the concerns are substantiated: "People like Elon Musk have to be held to account."

Human impact

Dr Daisy Dixon said users took everyday photos she posted and asked Grok to sexualise them, leaving her feeling "humiliated" and fearful for her safety. She welcomed the intervention but questioned X's accountability and said she is now afraid to open the app.

What officials should watch for now

  • Evidence of proactive content moderation: default blocks on sexualised edits of real people, robust detection of child sexual abuse material, tight rate limits on high-risk features like "Edit Image."
  • Transparency from X/xAI: policies, model guardrails, audit logs, and rapid takedown workflows; time-to-removal metrics and user reporting response times.
  • Effective age assurance and prompt safety controls that prevent sexualisation of identifiable individuals.
  • Clear appeal pathways for victims; preservation of evidence for law enforcement.
  • Regular risk assessments tied to product launches, especially features that modify real images without consent.

Immediate actions for departments and regulators

  • Coordinate with Ofcom on timelines, escalation criteria, and enforcement routes; prepare for potential investigation support.
  • Request technical detail from X/xAI: model safeguards, red-teaming results, and thresholds for automated blocking.
  • Ensure victim support channels are clearly signposted; establish rapid referral pathways to police where criminality is suspected.
  • Develop guidance for public sector staff on reporting impersonation or deepfake abuse tied to official roles.
  • Engage international counterparts (France, Malaysia, India) for aligned standards and data-sharing on cross-border abuse.
  • Assess proportionate interim measures if risk persists, including feature restrictions or other remedies within legal powers.

What happens next

Expect a quick regulatory assessment focused on compliance with the Online Safety Act and the platform's duty of care. If breaches are found, enforcement could include fines or directions to change features and processes.

The core principle stands: this is not about limiting free expression. It's about upholding the law and protecting people from abuse enabled by AI tools.

Training and capability-building

For teams building practical skills in AI risk, prompt safety and content controls, see public-sector-friendly options at Complete AI Training.

