UK MPs demand swift ban on AI nudification tools, urge crackdown on loopholes used by Grok

MPs urge a faster, broader ban on AI nudification after Grok churned out thousands of sexualized images and Ofcom opened a probe into X. Ministers promise action, but gaps remain.

Published on: Jan 15, 2026

UK urged to move faster on AI nudification ban as MPs warn of loopholes

The Science, Innovation and Technology Committee has pressed the UK government to accelerate its ban on AI nudification tools and close gaps that let multi-purpose platforms generate abusive images. The intervention follows reports that Grok, xAI's chatbot, produced thousands of sexualized images per hour after users prompted it with real photos.

Ofcom has opened a formal investigation into X (formerly Twitter) under the Online Safety Act after the platform, acquired by xAI in March 2025, allowed Grok's nudity features to remain available to paying users. Ministers say new legislation will be tabled urgently, but the committee argues the plan still leaves major risks unaddressed.

What happened

Between January 5 and 6, 2026, Grok reportedly generated around 6,700 sexualized images per hour. Some of the images allegedly depicted minors. UK regulators launched a probe, and pressure mounted on the government to hold X to account.

In a letter to Dame Chi Onwurah, who chairs the committee, Technology Secretary Liz Kendall said restricting access to paying users "effectively monetiz[es] this horrific crime." She argued that the Online Safety Act already gives Ofcom the mandate to act, including by seeking court orders to block non-compliant services in the UK.

The government also plans to ban nudification tools via amendments to the Crime and Policing Bill. Onwurah welcomed the progress but asked why the ban had taken so long, given that reports first emerged in August 2025, and questioned whether the measure would cover multi-purpose tools like Grok.

Where the policy gaps are

  • Scope risk: A ban that targets only single-purpose "nudification apps" may miss multi-purpose AI systems that can produce the same content.
  • Timeliness: There's a lag between public harm and action. The committee flagged slow movement despite months of warnings.
  • Platform accountability: Earlier recommendations to explicitly regulate generative AI and increase duties on platforms like X and tools like Grok were not adopted.
  • Principles: Responsibility and transparency are not yet embedded strongly enough across the online safety regime.

What government teams can do now

  • Legislative scope: Define "nudification" in a tech-neutral way that covers single-purpose apps and multi-purpose AI models, APIs, and integrated toolchains.
  • Online Safety Act duties: Direct Ofcom to prioritize enforcement on intimate image abuse; require risk assessments, default-on safety features, and rapid takedown service-level agreements (SLAs).
  • Provenance and detection: Mandate content provenance signals or watermarking for AI image tools and require platforms to detect and flag manipulated intimate content.
  • Access controls: Require know-your-customer (KYC) checks for paid model access tied to high-risk image features; implement rate limits and abuse-triggered shutdowns (a minimal sketch follows this list).
  • Child safety: Enforce age assurance where image generation features exist; require strict filters against any sexualized depiction of minors.
  • Reporting and evidence: Standardize victim reporting flows, evidence preservation for law enforcement, and clear appeal routes.
  • Penalties: Set meaningful fines for failures and maintain readiness to seek court-ordered blocking for non-compliance.
  • Procurement levers: Exclude non-compliant AI vendors from public-sector procurement until they meet safety and transparency thresholds.
  • Coordination: Align DSIT, Ofcom, Home Office, MoJ, NCA, and CPS on definitions, evidence standards, and cross-border cooperation.
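
To make the access-controls item above concrete, here is a minimal sketch, in Python, of a sliding-window rate limiter paired with an abuse-triggered kill switch. The class name, thresholds, and the assumption that a downstream safety filter calls flag_abuse() are illustrative, not a description of any platform's actual controls.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds; a real service would tune these per risk assessment.
MAX_REQUESTS_PER_WINDOW = 50
WINDOW_SECONDS = 3600
STRIKES_BEFORE_SHUTDOWN = 3

class ImageEndpointGuard:
    """Sliding-window rate limiting plus an abuse-triggered kill switch."""

    def __init__(self) -> None:
        self.requests: dict[str, deque] = defaultdict(deque)  # user -> timestamps
        self.strikes: dict[str, int] = defaultdict(int)       # user -> abuse flags
        self.disabled: set[str] = set()

    def allow_request(self, user_id: str) -> bool:
        if user_id in self.disabled:
            return False  # high-risk features already shut off for this user
        now = time.monotonic()
        window = self.requests[user_id]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # evict timestamps outside the window
        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False  # rate limit hit
        window.append(now)
        return True

    def flag_abuse(self, user_id: str) -> None:
        """Called when a safety filter flags generated output as abusive."""
        self.strikes[user_id] += 1
        if self.strikes[user_id] >= STRIKES_BEFORE_SHUTDOWN:
            self.disabled.add(user_id)  # abuse-triggered shutdown

guard = ImageEndpointGuard()
if guard.allow_request("user-123"):
    pass  # proceed to generation, behind safety filters
```

The point of pairing the two mechanisms is that a rate limit alone would merely have slowed output at the volumes reported here, while the kill switch removes access entirely once abuse is confirmed.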

Implementation timeline (practical)

  • Next 30 days: Publish draft ban language covering multi-purpose tools. Issue interim Ofcom guidance on intimate image abuse and platform duties.
  • Next 90 days: Require high-risk services to ship provenance markers (see the sketch after this timeline), strengthen safety filters, and stand up rapid takedown processes.
  • Next 180 days: Begin targeted audits, apply fines where warranted, and prepare blocking applications for persistent non-compliance.
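
To make the 90-day provenance item concrete, here is a minimal sketch of stamping and reading a marker in PNG text metadata using the Pillow library. The ai-provenance key and its value format are hypothetical stand-ins; a real mandate would more likely reference a signed standard such as C2PA than an ad-hoc text chunk.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai-provenance"  # hypothetical key, not an actual standard

def stamp_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a provenance marker in a PNG's text metadata."""
    meta = PngInfo()
    meta.add_text(PROVENANCE_KEY, f"generated-by={generator}")
    Image.open(src_path).save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> str | None:
    """Return the provenance marker if present, else None."""
    img = Image.open(path)
    return getattr(img, "text", {}).get(PROVENANCE_KEY)  # .text holds PNG text chunks

# Self-contained demo with a placeholder image.
Image.new("RGB", (64, 64)).save("raw.png")
stamp_provenance("raw.png", "stamped.png", "example-image-model")
print(read_provenance("stamped.png"))  # -> generated-by=example-image-model
```

Plain metadata like this is stripped by a simple re-encode, which is why provenance mandates are usually paired with platform-side detection duties rather than relied on alone.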

Why this matters for public bodies

Victims are often left chasing takedowns across multiple services while the content spreads. Delays compound harm and erode trust in the online safety regime. Closing scope gaps and enforcing existing powers are the fastest path to reducing abuse now.

Public-sector teams can also lead by example: set clear procurement standards, publish model risk assessments for any AI tools you deploy, and coordinate with local police units to streamline reporting and evidence handling.
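
On evidence handling specifically, one small building block is to hash and timestamp reported content before takedown, so removal does not destroy the evidentiary record. The manifest fields in this sketch are hypothetical; actual evidence standards would be agreed with law enforcement, as the coordination item above suggests.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_evidence(content_path: str, report_id: str, manifest_dir: str) -> dict:
    """Record a SHA-256 hash and UTC timestamp for a reported file."""
    digest = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
    record = {
        "report_id": report_id,  # hypothetical manifest schema
        "sha256": digest,
        "preserved_at": datetime.now(timezone.utc).isoformat(),
        "source_file": content_path,
    }
    out_dir = Path(manifest_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{report_id}.json").write_text(json.dumps(record, indent=2))
    return record
```

The hash lets investigators later confirm that a preserved copy matches what was originally reported, even after the live content has been taken down.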

Upskilling for policy and enforcement teams

If your unit is building capability on AI risk, safety tooling, and governance, see curated options by job role: AI courses by job.

