Grok deepfakes face UK crackdown as Ofcom probes and new law kicks in

Ofcom is probing Grok after reports of non-consensual and child deepfakes on X. A new UK offence will criminalise creating such images, putting platforms and tools on notice.

Categorized in: AI News, Legal
Published on: Jan 13, 2026

Grok AI Deepfakes: What Ofcom's Probe and a New Offence Mean for Legal Teams

Grok's image-generation features are under investigation after users publicly shared non-consensual sexual images created by the tool. Reports also suggest it generated sexualised images of children. That moves the issue from a "content moderation" debate into clear harm, with Ofcom now assessing potential breaches of UK online safety law.

The timing matters. The government says a new offence comes into force this week, making it illegal to create intimate, non-consensual images - including deepfakes. It also plans to amend a separate bill moving through Parliament to target companies supplying tools designed to generate these images.

What Ofcom is examining

Ofcom is investigating whether Grok and its distribution on X breached duties under the Online Safety Act (OSA). The Act doesn't name AI tools, but it does impose obligations on services to mitigate illegal content and protect users from harm. If X is found in breach, Ofcom can seek fines of up to 10% of worldwide revenue or £18m, whichever is greater, and, in extreme cases, apply to the courts to restrict access to the service in the UK.

Expect this to move more slowly than politicians would like. Ofcom has to run a careful process that will withstand free speech challenges. As one campaigner put it: "AI undressing people in photos isn't free speech - it's abuse."

The legal shift this week: creation becomes a crime

Until now, UK law focused on sharing intimate images without consent. Creating them with AI was a grey zone. That changes with a new offence criminalising the creation of intimate, non-consensual images, including deepfakes.

The government also intends to amend a separate data-focused law in Parliament to prohibit companies from supplying tools designed to produce such images. This expands liability beyond users and platforms to certain tool providers.

Ofcom's Online Safety framework and government guidance on intimate image abuse provide useful baselines for scoping risk.

Key risks for platforms, AI vendors, and enterprise adopters

  • Platform liability (OSA): Services hosting or distributing AI outputs face enforcement if illegal content is accessible, even if a third-party model generates it.
  • User liability (new offence): Individuals who create deepfake sexual images can be prosecuted, regardless of whether the image is shared publicly.
  • Tool supplier exposure (planned amendment): Companies "supplying tools designed" to produce illegal intimate images may face new prohibitions. Legal tests will hinge on product design, guardrails, and foreseeable misuse.
  • Child protection: Any generation involving children triggers heightened criminal exposure and reporting duties; corporate policies must reflect zero-tolerance and rapid escalation.

Enforcement blind spots the market should expect

Public sharing on X made this case visible. Private generation and closed-group sharing will be harder to detect, which shifts enforcement weight onto proactive controls, traceability, and cooperation duties. Ofcom's powers are significant, but they rely on evidence that content was accessible to UK users or that services failed in their risk mitigation duties.

Expect heavier scrutiny of provenance, audit logs, and the adequacy of model guardrails. Where outputs are ephemeral, logging and retention policies will become a legal as well as security issue.

Proof, authenticity, and disputes

As deepfakes become harder to distinguish from genuine images, victims face practical hurdles proving an image is synthetic. For litigation, that raises evidentiary challenges, chain-of-custody concerns, and a need for expert analysis. Companies should evaluate content authenticity standards and watermarking/metadata solutions (e.g., content provenance frameworks) to reduce dispute friction and support takedowns.
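A lightweight first step, short of adopting a full provenance standard such as C2PA, is to record a content hash and basic generation metadata at creation time so a disputed image can later be matched to an internal generation event. The sketch below is illustrative only: the field names and the model_id/prompt_id identifiers are assumptions for the example, not part of any particular standard.

    import hashlib
    import json
    from datetime import datetime, timezone

    def provenance_record(image_bytes: bytes, model_id: str, prompt_id: str) -> dict:
        """Build a minimal provenance record for a generated image.

        Illustrative sketch: the field names are hypothetical and not drawn
        from any specific provenance framework.
        """
        return {
            "sha256": hashlib.sha256(image_bytes).hexdigest(),  # content fingerprint
            "model_id": model_id,                                # which model produced the image
            "prompt_id": prompt_id,                              # link back to the logged request
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    # Persist the record alongside the asset so a later takedown request or
    # dispute can be matched to the original generation event.
    record = provenance_record(b"<image bytes>", model_id="image-gen-v2", prompt_id="req-8841")
    print(json.dumps(record, indent=2))

A hash-plus-metadata record of this kind does not prove authenticity on its own, but it gives legal teams something concrete to produce when a takedown or evidentiary question arrives.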

Cross-border and political blowback

Enforcement against a US-owned platform risks political friction. US officials have already signalled frustration at foreign regulation of American tech firms. UK regulators will need precise findings and clear jurisdictional hooks to withstand political and legal pushback.

Action list for in-house counsel and compliance

  • Map exposure: Identify where your products host, generate, transform, or distribute user images. Include employee use of external AI tools.
  • Update legal basis and policies: Add explicit prohibitions on creating or sharing intimate, non-consensual images. Align ToS, community standards, and enforcement playbooks to the new offence.
  • Guardrails and detection: Implement blocklists, nudity/CSAM classifiers, and prompt/output filters. Log prompts and outputs with jurisdiction tags and consent markers where lawful (see the sketch after this list).
  • Provenance and traceability: Adopt watermarking or cryptographic provenance where feasible. Maintain retention policies to evidence due diligence without over-collecting.
  • Escalation and reporting: Define 24/7 workflows for CSAM, law enforcement liaison, and Ofcom inquiries. Pre-authorise emergency takedown actions.
  • Vendor risk: Add clauses requiring AI vendors to prevent generation of illegal intimate images, disclose safety measures, and cooperate with lawful investigations.
  • Jurisdiction strategy: Prepare for service restriction orders, geo-blocking requests, and cross-border data transfer constraints tied to investigations.
  • Training: Brief trust & safety, engineering, and legal ops on the new offence and evidentiary requirements for complaints and appeals.
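To make the guardrail and logging bullets concrete, here is a minimal sketch of a keyword prompt check paired with a jurisdiction-tagged, hash-based audit record. It is not a substitute for dedicated nudity/CSAM classifiers; the blocklist terms, field names, and log format are illustrative assumptions only.

    import hashlib
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("genai.audit")

    # Illustrative blocklist only; production systems would pair this with
    # trained classifiers rather than relying on keyword matching alone.
    BLOCKED_TERMS = {"undress", "nude photo of", "remove clothes"}

    def check_prompt(prompt: str) -> bool:
        """Return True if the prompt passes the basic keyword guardrail."""
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def audit_entry(prompt: str, allowed: bool, user_jurisdiction: str) -> dict:
        """Build an audit record: hash the prompt rather than storing it
        verbatim, and tag the requester's jurisdiction."""
        return {
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "allowed": allowed,
            "jurisdiction": user_jurisdiction,  # e.g. "UK" can drive OSA-specific handling
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    prompt = "generate a landscape photo"
    allowed = check_prompt(prompt)
    log.info(json.dumps(audit_entry(prompt, allowed, user_jurisdiction="UK")))

Hashing the prompt rather than storing it verbatim is one way to balance evidentiary needs against over-collection; whether that trade-off is adequate will depend on the retention policy and the jurisdictions involved.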

What to watch next

  • Scope of the new offence: CPS guidance on thresholds, consent standards, and intent.
  • Tool supplier standard: How "designed to create" is interpreted for multi-purpose models.
  • Ofcom's enforcement playbook: Transparency reports, risk assessments, and remedy expectations for AI features embedded in social platforms.
  • Judicial review risk: Free speech arguments versus harm prevention, especially where tools claim general-purpose status.

The signal is clear: creation of intimate deepfakes becomes a criminal act, and platforms hosting or amplifying them face regulatory heat. Legal teams should lock in policies, provenance, and vendor controls now - before the first notice lands.

If your team needs practical upskilling on AI safety, policy, and tooling, see our curated programs by role: Complete AI Training - Courses by Job.

