UK urges Elon Musk to stop Grok-fueled deepfakes on X
UK Technology Secretary Liz Kendall has called on Elon Musk to block Grok, the AI tool built into X, from generating fake, sexualised images of women and girls. She described the images as "absolutely appalling" and pressed the company to act fast.
Her message follows multiple reports of users being targeted by Grok-generated deepfakes, including manipulated images of a 14-year-old actor. The spike began after a late-December update that let users upload photos and request AI edits.
Ofcom said it has made "urgent contact" with X and is investigating concerns that Grok is creating undressed images. X has since warned users not to generate illegal content, including child sexual abuse material.
Why this matters for government
- Harm prevention: AI-edited sexual images can traumatise victims, especially minors, and fuel online abuse at scale.
- Legal risk: Platforms operating in the UK face duties under the Online Safety Act; failure to prevent illegal content has consequences.
- Public trust: Visible inaction erodes confidence in digital services, law enforcement, and regulators.
- Cross-border enforcement: Coordinated action is required when content, platforms, and victims are in different jurisdictions.
Immediate steps for departments and regulators
- Set clear deadlines: Require X to disable or heavily restrict Grok's image-editing feature until effective safeguards are verified.
- Define safety baselines: Mandate filters that block sexual content involving minors, nudity synthesis, and face swaps without consent.
- Demand incident reporting: Require rapid disclosure of abuse volumes, detection efficacy, and response times.
- Victim support: Ensure fast takedown pathways, evidence preservation, and direct escalation channels for law enforcement.
- Procurement leverage: Tie public-sector ad spend and platform partnerships to compliance with safety-by-design standards.
- Public guidance: Coordinate clear messaging for schools, parents, and local authorities on reporting and support.
What X should implement now (and what government can test for)
- Default-off for risky features: Disable image edits for accounts lacking strong history or verification; require explicit opt-in with warnings.
- Proactive blocking: Filter prompts and outputs for sexual content, minors, nudity synthesis, and face/identity manipulation (a minimal sketch of such a gate follows this list).
- Friction and review: Add rate limits, cooldowns, and human review for flagged edits; block repeat offenders.
- Image provenance: Use watermarking and content credentials; reject uploads with signs of child imagery or known abuse hashes.
- Auditability: Provide Ofcom with logs on model guardrails, red-team results, and real-time metrics on prevented/allowed edits.
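For auditors who want to turn these expectations into concrete checks, the Python sketch below shows the shape of a pre-generation gate: prompt filtering, per-user rate limiting, known-hash matching, and an audit record for every decision. It is a minimal illustration under stated assumptions, not X's implementation; the term list, hash set, limits, and function names (gate_edit_request and the helpers) are hypothetical stand-ins for production classifiers, shared industry hash databases, and proper logging pipelines.

```python
import hashlib
import time
from collections import defaultdict, deque

# Hypothetical policy inputs. A production system would rely on vetted term
# lists, trained classifiers, and shared industry hash databases rather than
# the small placeholders used here.
BLOCKED_PROMPT_TERMS = {"undress", "remove clothes", "nude"}
KNOWN_ABUSE_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder digests
MAX_EDITS_PER_HOUR = 5

_recent_requests = defaultdict(deque)  # user_id -> timestamps of edit requests


def is_rate_limited(user_id: str) -> bool:
    """Apply a simple sliding-window limit on image-edit requests per user."""
    now = time.time()
    window = _recent_requests[user_id]
    while window and now - window[0] > 3600:
        window.popleft()
    if len(window) >= MAX_EDITS_PER_HOUR:
        return True
    window.append(now)
    return False


def violates_prompt_policy(prompt: str) -> bool:
    """Reject prompts containing terms associated with non-consensual edits."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_PROMPT_TERMS)


def matches_known_abuse_hash(image_bytes: bytes) -> bool:
    """Compare the upload against known-abuse digests.

    Real deployments use perceptual hashing against shared databases; an
    exact MD5 match is shown only to mark where that check would sit.
    """
    return hashlib.md5(image_bytes).hexdigest() in KNOWN_ABUSE_HASHES


def gate_edit_request(user_id: str, prompt: str, image_bytes: bytes) -> tuple[bool, str]:
    """Return (allowed, reason) and emit an auditable decision record."""
    if is_rate_limited(user_id):
        decision = (False, "rate_limited")
    elif violates_prompt_policy(prompt):
        decision = (False, "prompt_policy")
    elif matches_known_abuse_hash(image_bytes):
        decision = (False, "known_abuse_hash")
    else:
        decision = (True, "allowed")
    # Stand-in for structured audit logging a regulator could inspect.
    print(f"audit user={user_id} allowed={decision[0]} reason={decision[1]}")
    return decision


if __name__ == "__main__":
    print(gate_edit_request("user123", "add a party hat to this photo", b"example image bytes"))
```

The point of a gate like this is ordering and auditability: every request is either blocked with a recorded reason or explicitly allowed, which is what the log-based oversight described above depends on.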
Policy moves to consider next
- Standardized safety tests for generative image tools before public release.
- Clear timelines for takedowns and victim notifications, with penalties for non-compliance.
- Right of recourse: Faster legal avenues for victims to force removal and seek damages.
- Transparency codes: Regular, independent audits of AI safety controls and abuse stats.
- Training and capacity: Fund specialized units in law enforcement and regulators for AI-enabled abuse investigations.
Kendall's stance is direct: "No one should have to go through the ordeal of seeing intimate deepfakes of themselves online… X needs to deal with this urgently." This moment is a test of platform responsibility and the effectiveness of UK oversight.
Context and resources: Ofcom's online safety duties are outlined here: Ofcom: Online Safety. The statutory framework is set by the Online Safety Act.
If your department is building AI risk training or policy capability, you can explore practical courses here: Complete AI Training - Courses by Job.