UK Warns X: Stop Grok's Indecent AI Images or Face a Ban

X faces a possible UK block after a surge of indecent AI images; Ofcom has fast-tracked its probe. Government teams should protect people, review channels, and tighten response.

Categorized in: AI News, Government
Published on: Jan 10, 2026

X faces UK action over indecent AI images: what government teams need to know now

X has been warned it could be blocked in the UK after a surge of indecent AI-generated images appeared on the platform. Ofcom has accelerated its investigation, citing urgent concerns over content featuring women and children produced via the Grok AI tool.

In response, X restricted Grok's image generation and editing to paying subscribers. Critics, including victims, ministers and experts, say that move is inadequate and risks turning harmful capabilities into a premium feature.

What happened

  • Ofcom contacted X with a firm deadline to explain how it will curb illegal and harmful content and is now fast-tracking its assessment.
  • X limited Grok's image tools to paying users who provide personal details, arguing that abusers can then be identified. The volume and nature of the images have prompted public and political backlash.
  • Victims report ongoing harm, including manipulated images that create sexualised depictions without consent. Some women's organisations and public figures have quit the platform.
  • Ministers signalled that, if X does not comply with UK law, Ofcom could use "backstop" powers under the Online Safety Act to block access.
  • Researchers say Grok can still generate sexualised images in certain contexts, including within the app environment where content isn't instantly public.

Why this matters for government

This is a live online-safety, safeguarding and public-trust issue. Departments, agencies and public bodies must protect staff, victims and service users while ensuring communications do not amplify harm or normalise abuse.

The episode also tests the enforcement model of the Online Safety Act: whether swift, credible regulatory pressure changes platform behaviour at scale.

Key facts

  • X has an estimated 300 million monthly users (Similarweb).
  • Paying subscribers are estimated at 2.2-2.6 million (Appfigures).
  • Grok has been used to alter images of women and children, including clothing removal and sexualised poses.

Regulatory levers in play

  • Ofcom can require risk assessments, safety-by-design measures and rapid removal of illegal content under the Online Safety Act.
  • Non-compliance can trigger significant fines and, if necessary, court orders for network-level blocking in the UK.
  • Criminal investigations may follow where individual offences are suspected, including the creation and distribution of unlawful images.

For reference: Ofcom's online safety duties and the Online Safety Act.

Immediate actions for departments and agencies

  • Safeguarding: Issue an all-staff advisory covering risks, reporting routes, and steps to preserve evidence if targeted.
  • Communications: Review reliance on X; prepare contingency channels (e.g., GOV.UK, email lists, other platforms) and update content schedules.
  • Victim support: Provide dedicated contact points for staff and service users who are depicted in manipulated images; coordinate with police where appropriate.
  • Risk assessment: Re-run platform risk assessments and document mitigations. Prioritise harms to women and children and risks to protected groups.
  • Procurement/advertising: Consider pausing paid activity on X until safety controls meet legal and ethical expectations. Add contractual clauses requiring AI-safety controls when relevant.
  • Incident response: Set a 24/7 triage path for takedowns, legal escalation, and press handling. Maintain an audit trail of platform responses and timelines.
  • Data and privacy: Ensure handling of unlawful images follows strict access, storage, and deletion protocols to avoid secondary harm.

Policy options being discussed

  • Default-on safety settings and stricter guardrails for image generation, with clear refusal behaviour for sexualised prompts.
  • Identity, traceability and rate limits for any generative functions that pose abuse risks.
  • Independent audits, transparency reports, and rapid user-facing reporting tools for manipulated imagery.
  • Faster takedowns, with measurable service-level targets and penalties for failure.
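"Measurable service-level targets" for takedowns implies tracking the share of reports resolved within a fixed window. The sketch below shows one way to compute that; the 24-hour target is an assumption for illustration, not a figure from the Online Safety Act or Ofcom guidance.

```python
from datetime import datetime, timedelta

# Assumed target window for illustration only.
SLA_TARGET = timedelta(hours=24)

def sla_compliance(report_and_removal_times):
    """Fraction of takedowns completed within the target window.

    Each item is a (reported_at, removed_at) pair of datetimes.
    """
    durations = [removed - reported for reported, removed in report_and_removal_times]
    within = sum(1 for d in durations if d <= SLA_TARGET)
    return within / len(durations) if durations else 1.0

# Two sample cases: one resolved in 6 hours, one in 48 hours.
sample = [
    (datetime(2026, 1, 9, 10, 0), datetime(2026, 1, 9, 16, 0)),
    (datetime(2026, 1, 9, 11, 0), datetime(2026, 1, 11, 11, 0)),
]
print(f"SLA compliance: {sla_compliance(sample):.0%}")  # prints "SLA compliance: 50%"
```

Publishing a metric like this per reporting period is what would make "penalties for failure" enforceable rather than aspirational.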

Risks to anticipate

  • Re-emergence of harmful content via workarounds, third-party clients, or in-app generation that is not publicly visible.
  • Victim identification and harassment following viral spread, even after takedown.
  • Staff exposure to harmful content while monitoring; rotate duties and provide wellbeing support.
  • Legal risk if official accounts inadvertently engage with or amplify manipulated content.

What to watch next

  • Ofcom's near-term update on its expedited assessment and any enforcement steps.
  • Whether X introduces stronger technical guardrails and transparent metrics on detection and removal.
  • Cross-government guidance on social-media use pending compliance improvements.
  • Potential law enforcement activity and civil claims from victims.

Practical checklist for government teams

  • Issue internal comms today: risks, reporting, mental health resources.
  • Map critical comms reliant on X; set backups and migration paths.
  • Nominate a takedown lead and legal contact; test the process on a sample case.
  • Log all incidents, timestamps, platform responses, and outcomes for accountability.
  • Review training gaps for press officers, safeguarding leads, and moderators handling AI-generated content.

Training and resources

If your team needs rapid upskilling on AI risk, governance and safety-by-design concepts, see this curated directory of role-based programs: AI courses by job. Use alongside internal policies and statutory guidance, not as a substitute.

This situation is moving quickly. Keep your response simple: protect people first, document everything, and be ready to pivot your channel strategy while regulators decide the next steps.

