Malaysia to take legal action against X over Grok AI abuse as global backlash builds

Malaysia plans legal action against X over Grok misuse tied to harmful sexual content, including alleged images of minors. Takedown notices to X and xAI went unanswered; regulators in the UK and France are also investigating.

Published on: Jan 14, 2026

Malaysia moves to take legal action against X over Grok AI: a legal briefing

Malaysia's communications regulator said it will initiate legal action against X, citing user safety risks tied to Grok, the platform's generative AI feature. The regulator has identified misuse of Grok to create and distribute harmful content, including obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images.

The statement flagged alleged content involving women and minors as "of serious concern... Such conduct contravenes Malaysian law and undermines the entities' stated safety commitments." Notices were served to X and xAI to remove the content; authorities say no action has been taken to date.

Malaysia and Indonesia temporarily blocked Grok over the weekend. Separately, Britain's media regulator opened an investigation into X, and French officials have reported the company to prosecutors and regulators. xAI responded to a request for comment with "Legacy Media Lies." X has not responded publicly.

Probable legal hooks in Malaysia

Expect reliance on the Communications and Multimedia Act 1998 (CMA), including:

  • Section 233: improper use of network services/facilities to transmit indecent, obscene, or offensive content.
  • Section 211: prohibition on providing content that is indecent, obscene, false, menacing, or offensive with intent to annoy, abuse, threaten, or harass.
  • Section 263: directions to service providers to prevent the commission of offences, enabling blocking and takedown orders.

Other potential exposure points include Penal Code section 292 (obscene materials) and child protection offences, with authorities signaling heightened concern over non-consensual and minor-involved imagery. Evidence Act section 114A (presumption of publication) may appear in user identification efforts.

Platform liability and enforcement posture

X and xAI could be treated as content applications service providers operating in Malaysia by virtue of availability to local users. Failure to act on regulator notices increases risk of prosecution, fines, feature-level blocking, or broader service disruption.

Enforcement tools likely include DNS/IP blocking, app store pressure, and directions to intermediaries under s.263. Cross-border challenges will push authorities to focus on practical leverage points: local operations, advertisers, and technical access.
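To illustrate the first of those tools: a DNS-level block typically surfaces to end users as failed or redirected name resolution rather than any error from the service itself. A minimal client-side sketch of how that would be observed (whether a given ISP returns NXDOMAIN or a block-page address is an assumption, not documented MCMC practice):

```python
import socket

def resolve(host: str) -> str | None:
    """Return the resolved IPv4 address, or None on an NXDOMAIN-style failure."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Under a DNS-level block, this either returns None or an ISP block-page
# address instead of the service's real address.
print(resolve("example.com"))
```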

Global regulatory echo

The moves in the UK and France signal convergence on platform accountability for AI-enabled image generation and dissemination. For reference, see Malaysia's regulator at MCMC and UK guidance on online safety at Ofcom.

Key risks for X/xAI and comparable providers

  • Criminal exposure under CMA s.211/s.233 for content transmitted via the service, including user-generated deepfakes.
  • Failure-to-remove risk after receipt of specific notices; aggravating factor if minors are involved.
  • Orders compelling geoblocking, model-level filters, or feature suspension until adequate safeguards are implemented.
  • Discovery and data preservation demands tied to user identification and law enforcement cooperation.

Action checklist for counsel

  • Stand up a Malaysia-specific response plan: designate a point of contact, track deadlines on all MCMC directives, and log actions taken.
  • Implement proactive filters for sexual content and non-consensual manipulated images; hard-block imagery involving minors; document efficacy metrics.
  • Enable fast-track takedown workflows with auditable timestamps; aim for removal within hours, not days (a record-keeping sketch follows this list).
  • Geofence Grok features until risk controls are validated; consider rolling back public-generation tools that can output illegal content.
  • Issue litigation holds; preserve logs, prompts, outputs, model versioning, and moderation decisions for potential proceedings.
  • Assess ToS and product UX: explicit prohibitions, in-product warnings, friction for risky prompts, and repeat-offender bans.
  • Map data flows and processors to support lawful requests while minimizing unnecessary retention.
  • Coordinate with app stores and network partners to preempt escalatory blocks by showing credible compliance steps.
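For the takedown-workflow item above, a minimal sketch of what an auditable record might look like. It assumes a simple in-house ticketing model; the TakedownRecord class and the MCMC-style notice number are illustrative, not any real X/xAI or MCMC API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class TakedownRecord:
    """Auditable record of one takedown request and its handling."""
    notice_id: str            # regulator's reference (hypothetical format)
    content_url: str
    received_at: str
    actions: list = field(default_factory=list)

    def log(self, action: str) -> None:
        # Append each action with a UTC timestamp so the trail is auditable.
        self.actions.append({
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def audit_digest(self) -> str:
        # Hash the full record so later tampering is detectable.
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = TakedownRecord(
    notice_id="MCMC-2026-0142",  # hypothetical notice number
    content_url="https://example.com/post/123",
    received_at=datetime.now(timezone.utc).isoformat(),
)
record.log("notice acknowledged")
record.log("content removed")
print(record.audit_digest())
```

Keeping the digest alongside the record is one way to show a regulator that action logs were not edited after the fact.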

For brands and enterprises using Grok-like tools

  • Pause use cases that can plausibly produce non-consensual or sexualized images; implement prompt and output filters (a filter sketch follows this list).
  • Update vendor risk assessments and DPAs; require documented safety controls and turnaround SLAs for takedowns.
  • Prepare for discovery and incident reporting obligations if harmful content surfaces through your channels.
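For the filtering item above, a minimal sketch of a pre-generation prompt screen. The deny patterns are illustrative only; a production filter would pair trained classifiers and human review with any keyword rules.

```python
import re

# Illustrative deny patterns only; real systems need far broader coverage.
DENY_PATTERNS = [
    re.compile(r"\bundress(?:ed|ing)?\b", re.IGNORECASE),
    re.compile(r"\bnon[- ]?consensual\b", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True when a prompt should be blocked before generation."""
    return any(pattern.search(prompt) for pattern in DENY_PATTERNS)

print(screen_prompt("undress this photo of my coworker"))  # True
print(screen_prompt("a mountain landscape at dusk"))       # False
```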

For victim-side counsel in Malaysia

  • Move quickly on evidence preservation: URLs, hashes, timestamps, and any available metadata (a capture sketch follows this list).
  • File complaints with MCMC and the police; consider CMA s.233, Penal Code s.292, and child-protection statutes where applicable.
  • Seek court orders for user identification and takedowns; leverage Evidence Act s.114A presumptions when appropriate.
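For the preservation item above, a minimal sketch that captures a URL, a SHA-256 hash, and a UTC timestamp in one record. It assumes the content is still publicly reachable; real chain-of-custody practice would also capture response headers, screenshots, and a witness log.

```python
import hashlib
import json
from datetime import datetime, timezone
from urllib.request import urlopen

def preserve_evidence(url: str) -> dict:
    """Fetch content and record its URL, SHA-256 hash, and capture time."""
    data = urlopen(url, timeout=30).read()
    return {
        "url": url,
        "sha256": hashlib.sha256(data).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(data),
    }

record = preserve_evidence("https://example.com/offending-post")
print(json.dumps(record, indent=2))
```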

Bottom line: Malaysian authorities are signaling low tolerance for AI-driven sexualized and non-consensual imagery. Platforms with generative features should be ready to prove rapid removal, strong filters, and jurisdiction-specific controls, or expect aggressive enforcement.

