Malaysia and Indonesia Block Grok Over Non-Consensual Sexual Deepfakes: Key Takeaways for Public Officials
Indonesia and Malaysia have temporarily blocked access to Grok, the A.I. chatbot from xAI, after a surge of sexually explicit, non-consensual images of real people appeared on X. Both governments moved over the weekend, making them the first countries to formally ban the application.
"The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity and the security of citizens in the digital space," said Indonesia's communications and digital affairs minister, Meutya Hafid.
Grok's image tool had been used to generate sexualized content of real individuals, including minors, according to multiple reports. xAI restricted image generation to paying X subscribers last week, but officials argue the change merely puts a price on harmful content rather than preventing it.
The pushback extends beyond Southeast Asia. Britain's prime minister, Keir Starmer, criticized the subscriber-only restriction as "not a solution," and U.S. Senators Ron Wyden, Ed Markey, and Ben Ray Luján urged Apple and Google to remove the app from their stores. Elon Musk has said users prompting sexually explicit images of children would face "consequences."
Both countries have a track record of firm action on online harms. Indonesia has previously blocked Pornhub and OnlyFans and briefly restricted TikTok in 2018 over child-safety concerns. Malaysia has proposed barring children under 16 from social media following high-profile bullying cases.
Why this matters for governments
- Legal exposure: Non-consensual sexual imagery can violate criminal, child-protection, privacy, and harassment laws. Cross-border sharing complicates enforcement and evidence chains.
- Platform accountability: App store and model providers can be pressured, or required, to gate, log, and block unsafe capabilities by default.
- Signal to market: Rapid regulatory action sets expectations for safety-by-default design in generative tools.
Immediate actions for regulators and public agencies
- Issue binding notices to platforms: Block prompts targeting real people; prohibit sexualized content; implement keyword/person-blocklists; throttle and quarantine flagged generations (see the screening sketch after this list).
- Coordinate with app stores: Require removal, suspension, or conditional reinstatement tied to verifiable safety controls and audits.
- Mandate incident reporting: Time-bound disclosure of abuse rates, detection efficacy, and takedown times. Publish enforcement metrics.
- Protect minors: Enforce age assurance, default safe modes for youth, and school network/device blocklists for risky A.I. features.
- Support victims: Fast-track removal orders; preserve evidence; provide legal and mental health resources.
- Cross-border cooperation: Use MLATs and regional hotlines to speed evidence requests and offender identification.
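To make the prompt-blocking item concrete, here is a minimal sketch of a deny-by-default screening gate. The blocklist entries, term lists, and quarantine step are illustrative assumptions, not any platform's actual lists or workflow.

```python
from dataclasses import dataclass

# Illustrative blocklists only; a production system would load vetted,
# regularly updated lists of protected persons and prohibited terms.
PERSON_BLOCKLIST = {"named public figure", "local official"}   # hypothetical entries
SEXUAL_CONTENT_TERMS = {"nude", "undress", "explicit"}          # hypothetical entries

@dataclass
class ScreeningResult:
    allowed: bool
    reasons: list

def screen_prompt(prompt: str) -> ScreeningResult:
    """Deny-by-default screening: flag prompts that name listed real people
    or request sexualized content, so generations can be quarantined."""
    text = prompt.lower()
    reasons = []
    if any(name in text for name in PERSON_BLOCKLIST):
        reasons.append("targets a listed real person")
    if any(term in text for term in SEXUAL_CONTENT_TERMS):
        reasons.append("requests sexualized content")
    return ScreeningResult(allowed=not reasons, reasons=reasons)

if __name__ == "__main__":
    result = screen_prompt("Generate an explicit image of a named public figure")
    if not result.allowed:
        # In practice: quarantine the request, log it, and route it to human review.
        print("Blocked:", "; ".join(result.reasons))
```

A binding notice would specify what counts as a listed person, how often lists must be refreshed, and how quarantined generations are logged for audit.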
Technical and governance controls A.I. providers should implement now
- Hard blocks on generating images of real people without verified consent; default denials for any sexual content.
- Robust prompt and output filters (names, faces, minors, celebrities), plus automated and human review for high-risk prompts.
- Provenance and watermarking: Embed and verify C2PA-style metadata; hash and share abusive samples for platform-wide blocking.
- Abuse rate-limits: Session-level caps, friction for sensitive categories, and dynamic risk scoring (see the rate-limit sketch after this list).
- Red-teaming and third-party audits focused on sexual content, minors, and non-consensual imagery; publish safety evaluation summaries.
- Comprehensive logging and user accountability: Verified accounts for image tools, rapid ban/appeal flows, and law-enforcement escalation paths.
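Below is a minimal sketch of the rate-limit and risk-scoring items, assuming a simple per-session counter and a toy scoring rule. The caps, threshold, keyword list, and the notion of a per-user "prior flags" count are hypothetical tuning choices, not a description of any provider's system.

```python
import time
from collections import defaultdict

# Hypothetical tuning values for illustration only.
SESSION_CAP = 20         # max image generations per session per hour
RISK_THRESHOLD = 0.7     # score above which a request needs human review

_session_requests = defaultdict(list)   # session_id -> timestamps of recent requests

def risk_score(prompt: str, prior_flags: int) -> float:
    """Toy risk score combining sensitive keywords with the user's recent flag history."""
    sensitive_terms = {"nude", "undress", "explicit"}   # illustrative
    term_hits = sum(term in prompt.lower() for term in sensitive_terms)
    return min(1.0, 0.3 * term_hits + 0.2 * prior_flags)

def allow_generation(session_id: str, prompt: str, prior_flags: int) -> str:
    """Return 'allow', 'review', or 'deny' based on session caps and risk."""
    now = time.time()
    recent = [t for t in _session_requests[session_id] if now - t < 3600]
    _session_requests[session_id] = recent
    if len(recent) >= SESSION_CAP:
        return "deny"        # hard session-level cap
    if risk_score(prompt, prior_flags) >= RISK_THRESHOLD:
        return "review"      # add friction: human review before release
    _session_requests[session_id].append(now)
    return "allow"
```

The design choice to return "review" rather than silently allowing borderline requests is what creates the friction regulators are asking for: high-risk prompts slow down and leave an audit trail.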
Policy options under consideration or already in play
- Liability shifts: Duty-of-care standards that place responsibility on platforms for foreseeable harms from unsafe features.
- Transparency mandates: Regular public reporting on abuse prevalence, model updates, and guardrail effectiveness.
- Conditional access: App store rules requiring consent verification for real-person image generation.
- Procurement levers: Require A.I. safety certifications and independent audits for any tool used in public institutions.
Guidance for schools and public institutions
- Blocklists on networks and managed devices for high-risk A.I. image tools; monitor DNS lookups and app installations (a DNS-monitoring sketch follows this list).
- Staff training on deepfake identification, reporting protocols, and evidence preservation.
- Clear victim-support workflows: Immediate takedown requests, liaison with platforms, and counseling referrals.
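As one way to act on the network-blocklist item, here is a minimal sketch of a log check an IT team could run. The domain names and the CSV log format (timestamp, client_ip, domain) are placeholders, not real services or any specific resolver's output.

```python
import csv

# Placeholder domains; a real deployment would use a maintained blocklist.
HIGH_RISK_DOMAINS = {"risky-image-tool.example", "unsafe-generator.example"}

def flag_blocked_lookups(dns_log_path: str):
    """Scan a simple CSV DNS log (timestamp, client_ip, domain) and
    report lookups of blocklisted domains for follow-up."""
    hits = []
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if any(domain == d or domain.endswith("." + d) for d in HIGH_RISK_DOMAINS):
                hits.append((row["timestamp"], row["client_ip"], domain))
    return hits

if __name__ == "__main__":
    for ts, ip, domain in flag_blocked_lookups("dns_queries.csv"):
        print(f"{ts}  {ip}  attempted lookup of blocklisted domain {domain}")
```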
What to expect next
More governments will likely move from warnings to enforceable orders. App store gatekeeping will become a primary lever, and "subscriber-only" access won't pass as a safety fix. The baseline is simple: if an A.I. tool can produce non-consensual sexual images, it must be gated, logged, and audited, or it will be blocked.
Upskilling your team
If you're standing up A.I. governance, safety, or policy functions, targeted training can accelerate the basics: risk controls, audits, and incident response.