Malaysia Considers Mandatory AI Labels to Tackle Deepfakes and Misinformation on Social Media

Malaysia plans mandatory labels for AI-generated content to curb misinformation online. New rules under the Online Safety Act may require tags like “AI-Generated” by year-end.

Categorized in: AI News, Government
Published on: Jul 30, 2025

Malaysia Considers Mandatory AI Content Labelling to Combat Misinformation

The Malaysian government is exploring mandatory labels for content generated or altered using artificial intelligence (AI). The move aims to help social media users identify such content and reduce the spread of misinformation online.

Communications Minister Datuk Fahmi Fadzil said labels such as “AI-Generated” or “AI-Enhanced” are being considered as part of new regulations under the Online Safety Act (Act 866). The Malaysian Communications and Multimedia Commission (MCMC) is finalizing these rules, and the act is expected to come into force by the end of the year.

Addressing Deepfake Concerns

During a parliamentary session, Fahmi responded to concerns about the increasing presence of deepfakes—manipulated videos or audio that misuse the likeness or voice of public figures. He emphasized that regulating generative AI technologies is an active topic both nationally and internationally.

He underscored the responsibility of social media platforms in preventing the unchecked spread of deepfake content, stating that platforms must take steps to monitor and control such material.

Investigation Procedures for Social Media Complaints

On the handling of complaints related to social media, Fahmi explained that investigations proceed under the Communications and Multimedia Act 1998 (Act 588). The process begins with a first information report (FIR) and may involve referral to the deputy public prosecutor.

Where initial evidence exists, authorities may issue a notice under Section 255 of Act 588 to the individual involved, and MCMC or the police may seize communication devices linked to the alleged offence.

Fahmi gave an assurance that all investigations are conducted transparently and fairly to ensure the law is applied properly.

Practical Implications for Government Agencies

  • Awareness and Compliance: Government bodies working with digital content should prepare for new regulations requiring clear labelling of AI-generated or enhanced materials.
  • Collaboration with Platforms: Agencies may need to coordinate with social media operators to monitor and control deepfake content.
  • Legal Preparedness: Understanding investigative procedures under Act 588 will be important for handling complaints and enforcement actions related to online content.

For professionals involved in AI and digital governance, staying updated on these regulations is crucial. Training resources on AI technology and its ethical use can help ensure compliance and effective management of AI-generated content. Explore relevant courses at Complete AI Training.

