Malaysia Weighs Licensing AI Apps To Prevent CSAM
The government is studying a licensing system for artificial intelligence (AI) applications to stop the creation and spread of child sexual abuse material (CSAM). The discussion surfaced at an Internet Safety Day event at Titiwangsa Lake Gardens, underscoring a push for safer digital services across the country.
Currently, AI apps are not licensed by the Malaysian Communications and Multimedia Commission (MCMC). Offences are instead tied to the sending and misuse of network services, including the abuse of network facilities to communicate grossly offensive content. Authorities say existing powers allow them to act, but a formal licensing regime has not yet been implemented.
The Digital Ministry may draft new laws specific to AI, while the MCMC would likely handle licensing. Officials are reviewing whether enforcement agencies should provide findings or recommendations before any licence is issued.
A recent example shows how that could work in practice. Grok, an AI application integrated into X, was temporarily blocked by the MCMC on Jan 11 following claims that it turned images of children and women into obscene content. The platform was ordered to introduce safeguards in line with local laws, and access was restored on Jan 23.
What A Licensing Regime Could Require
- Risk-based tiers: higher-risk AI models and features face stricter controls and more frequent checks.
- Mandatory safeguards: age protections, abuse detection, CSAM hash-matching, and proactive content filtering (see the hash-matching sketch after this list).
- Provenance and watermarking: label AI-generated media and support industry standards for traceability.
- Incident response: clear escalation paths, 24/7 takedown capability, and time-bound remediation.
- Independent testing: regular red-teaming and third-party audits to verify safety claims.
- Transparency: safety reports, safety-by-design documentation, and user-facing safety summaries.
- Accountability: a local responsible officer, audit trails, and cooperation with law enforcement.
- Data safeguards: strict handling of user data, retention limits, and protections against model abuse.
- API controls: rate limits, content filters, and usage monitoring for high-risk integrations (a rate-limiter sketch follows below).
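To make the hash-matching safeguard concrete, here is a minimal sketch of perceptual-hash matching against a blocklist. It is illustrative only: the open-source imagehash library stands in for vetted industry tools such as PhotoDNA, and the blocklist values, the is_blocked helper, and the distance threshold are all hypothetical.

```python
# Illustrative sketch only: real deployments match against vetted hash lists
# maintained by child-safety organisations, typically via tools like PhotoDNA.
# The blocklist hashes and threshold below are hypothetical.
from PIL import Image
import imagehash

# Hypothetical blocklist of known-abusive image hashes (hex strings).
BLOCKLIST = {"d1d1d1d1d1d1d1d1", "8f373714acfcf4d0"}
MAX_HAMMING_DISTANCE = 5  # tolerance for near-duplicate images

def is_blocked(image_path: str) -> bool:
    """Return True if the image's perceptual hash is near a blocklisted hash."""
    candidate = imagehash.phash(Image.open(image_path))
    for hex_hash in BLOCKLIST:
        known = imagehash.hex_to_hash(hex_hash)
        # Subtracting two hashes yields their Hamming distance.
        if candidate - known <= MAX_HAMMING_DISTANCE:
            return True
    return False
```

Perceptual hashes tolerate resizing and light edits, which is why near-duplicate matching, rather than exact file hashing, is the usual approach for this kind of safeguard.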
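For the API controls item, a token-bucket limiter is one common way to enforce rate limits on high-risk endpoints such as image generation. This is a minimal sketch with illustrative capacity and refill values, not figures drawn from any regulation.

```python
# Minimal token-bucket rate limiter; capacity and refill rate are
# illustrative, not regulatory figures.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Usage: allow roughly 5 image-generation calls per second per client,
# rejecting requests once the bucket is empty.
limiter = TokenBucket(capacity=10, refill_per_second=5.0)
if not limiter.allow():
    raise RuntimeError("Rate limit exceeded; request rejected")
```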
Implications For Public Agencies And Vendors
- Procurement will change. Expect licence checks, safety attestations, and stricter contract clauses.
- Vendors should prepare safety documentation, incident playbooks, and evidence of safeguards.
- Agencies will need clear reporting channels for suspected abuse and faster escalation to enforcement.
- Cross-agency coordination matters: align with MCMC, law enforcement, and legal counsel on roles and timelines.
- Data governance should cover AI-specific risks: training data sources, model behaviour, and user protections.
Practical Next Steps For Government Teams
- Inventory current and planned AI tools. Flag high-risk features (image generation, image editing, open uploads).
- Set interim guardrails: default safe modes, stricter filters, and limited access for pilot phases.
- Update acceptable-use policies to explicitly ban abusive prompts and automate enforcement where possible.
- Require vendors to document safeguards, testing results, and response times before deployment.
- Create a single inbox and on-call process for abuse reports, with SLAs for triage and takedown (see the sketch after this list).
- Run staff training on recognising abuse signals and reporting procedures.
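As a sketch of the SLA idea, the snippet below tracks triage and takedown deadlines for each abuse report. The one-hour and 24-hour targets, the AbuseReport class, and its fields are assumptions for illustration; actual SLAs would come from agency policy or licence conditions.

```python
# Sketch of SLA tracking for abuse reports. The targets below are assumed
# for illustration; real values would come from policy or licence terms.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

TRIAGE_SLA = timedelta(hours=1)     # assumed target: acknowledge and classify
TAKEDOWN_SLA = timedelta(hours=24)  # assumed target: remove confirmed content

@dataclass
class AbuseReport:
    report_id: str
    received_at: datetime
    triaged_at: Optional[datetime] = None
    resolved_at: Optional[datetime] = None

    def is_overdue(self, now: datetime) -> bool:
        """True if a pending step has passed its SLA deadline."""
        if self.triaged_at is None and now > self.received_at + TRIAGE_SLA:
            return True
        if self.resolved_at is None and now > self.received_at + TAKEDOWN_SLA:
            return True
        return False

# Usage: flag overdue reports for on-call escalation.
report = AbuseReport("RPT-001", received_at=datetime.now(timezone.utc))
print(report.is_overdue(datetime.now(timezone.utc)))  # False right after receipt
```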
Context And References
For regulatory updates and guidance, see the Malaysian Communications and Multimedia Commission (MCMC). For global awareness efforts, refer to Safer Internet Day.
If your agency is building AI literacy and governance capabilities, explore practical training options for public-sector roles at Complete AI Training - Courses by Job.