Government demands urgent action from X as Grok AI is used to create sexualised deepfakes
Technology Secretary Liz Kendall has called on Elon Musk's X to immediately stop its Grok chatbot from being used to generate non-consensual sexualised images of women and girls. Multiple examples on the platform show users asking Grok to "undress" people, place them in bikinis, or create sexual scenes without consent. Kendall called the situation "absolutely appalling," adding, "we cannot and will not allow the proliferation of these degrading images."
X said it takes action against illegal content, including Child Sexual Abuse Material, by removing it, permanently suspending accounts, and working with authorities. The company added: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
Regulatory pressure intensifies
Ofcom has made urgent contact with xAI and is investigating reports that Grok is producing "undressed images" of people. Kendall endorsed the regulator's intervention: "It is absolutely right that Ofcom is looking into this as a matter of urgency and it has my full backing to take any enforcement action it deems necessary."
Senior political voices are also pushing for tougher measures. Liberal Democrat leader Sir Ed Davey urged the government to act "very quickly," including the option to "reduce access" to X, and called for a potential National Crime Agency investigation if reports are confirmed.
In Europe, the sentiment is similar. Thomas Regnier, spokesman for tech sovereignty at the European Commission, said the issue is taken "very seriously" and stated: "The Wild West is over in Europe… All companies have the obligation to put their own house in order - and this starts by being responsible and removing illegal content that is being generated by your AI tool."
'Dehumanising' for victims
Grok is an AI assistant built into X, free to use with additional paid features, that responds when tagged in posts and can edit images uploaded by users. Women have reported finding sexualised versions of their everyday photos created without consent, describing the experience as dehumanising.
Dr Daisy Dixon said seeing altered images of herself left her "shocked," "humiliated," and worried for her safety. She supports the government's stance but remains frustrated by X's responses to reports: "We are being sent inappropriate AI images/videos daily, but X continues to reply that there has been no violation of X rules."
What government teams should do now
- Coordinate with Ofcom and law enforcement: Ensure clear escalation paths to Ofcom for platform non-compliance and to the National Crime Agency where criminality may be involved.
- Strengthen reporting and victim support: Provide staff and citizens with simple reporting routes, fast triage, and access to counselling and legal guidance.
- Issue platform preservation requests: Where appropriate, seek preservation of offending content and metadata for investigation before takedown.
- Review internal policy and comms: Update social media and AI-use policies; prohibit AI image editing of real people without consent; prepare holding lines for emerging incidents.
- Procurement and vendor clauses: Require safety guardrails, abuse prevention, audit logs, and rapid takedown commitments in contracts with AI-enabled tools.
- Train frontline teams: Brief HR, safeguarding, and communications on indicators of intimate-image abuse, response steps, and evidence handling.
Legal context: what the law expects
Kendall made clear this is a legal issue, not a speech issue. Intimate image abuse and cyberflashing are priority offences under the Online Safety Act, including when images are AI-generated. Platforms must prevent such content and act swiftly to remove it.
Ofcom is already engaging with xAI. Further enforcement remains on the table if compliance falls short. Political support exists across parties for rapid action, up to and including access restrictions if needed.
Upskilling for public sector teams
If your department is building AI safety policies or needs hands-on training for compliance and guardrails, explore role-based learning paths here: Complete AI Training - Courses by job.