Indonesia Develops AI Guidelines to Combat Disinformation and Digital Threats

Indonesia's Ministry of Communication and Digital is finalizing AI risk management guidelines focusing on preventing disinformation. Public consultation ends August 29, 2025, before formal approval.

Published on: Sep 06, 2025

Indonesia's Ministry of Communication and Digital Prepares AI Risk Management Guidelines

The Ministry of Communication and Digital in Indonesia is developing new guidelines to manage the risks associated with artificial intelligence (AI), with a strong focus on curbing the spread of false information. These guidelines aim to assist AI developers in implementing necessary precautions during system development.

Aju Widya Sari, Director of Artificial Intelligence and New Technology Ecosystems at the Ministry, explained that these guidelines will serve as a framework from which various sectors can develop their own specific rules.

Upcoming Presidential Regulation and Public Consultation

The Ministry is finalizing the guidelines and has requested approval to formalize them as a Presidential Regulation. This step is expected to follow the close of the public consultation period on August 29, 2025.

Addressing Disinformation as a Key Risk

The draft guidelines identify disinformation, including fake news and deepfakes, as a significant threat. Because AI can generate realistic but false content, it can be misused to spread misleading information and undermine democratic processes.

To combat this, the Ministry has integrated disinformation prevention into its Quick Wins program. This fast-track initiative demonstrates practical and responsible AI applications, with disinformation prevention highlighted as a primary use case led by the Ministry.

Government's Stance on Digital Threats

The government groups disinformation with slander and hate speech under the category "DFK," recognizing these as serious digital threats. According to official Ministry data, more than 1.4 million pieces of harmful content, including disinformation, were handled between January and August 2025.

  • Guidelines aim to help developers build safer AI systems.
  • Presidential Regulation expected after public input closes in August 2025.
  • Disinformation prevention is a central focus, integrated into the Quick Wins program.
  • Government actively monitors and handles harmful digital content.

For government officials and departments involved in digital governance and AI policy, staying informed on these developments will be crucial. Responsible AI deployment and effective management of digital disinformation remain priorities.