Indonesia pushes AI accountability as deepfake scams cost Rp700 billion

Indonesia urges transparent, accountable AI as deepfakes drive Rp700 billion in losses. A National AI Roadmap and tighter rules across schools, government, and finance are next.

Published on: Oct 24, 2025

Indonesia Calls for Transparent, Accountable AI: What Leaders Need to Do Now

Indonesia's Ministry of Communication and Digital Affairs is calling on AI developers to build with transparency and accountability. The push comes as unethical AI use, especially deepfakes, continues to harm people and institutions. Losses tied to AI-enabled fraud have reached Rp700 billion (about US$42 million), and more cases are expected without stronger safeguards.

A National AI Roadmap is in the works to set clearer rules and require accountability from AI builders. Until it's finalized, enforcement relies on existing laws, including the Electronic Information and Transactions Law and the Personal Data Protection Law, along with the Criminal Code. See references: ITE Law (Amendment: UU 19/2016), Personal Data Protection Law (UU 27/2022).

The ministry is also scaling digital literacy programs to reduce victimization from AI-based scams. For public institutions, this is a signal to move faster on governance, training, and technical controls.

What this means for education, government, and legal leaders

  • Require model and system documentation: purpose, data sources, limitations, and known failure modes.
  • Log data provenance and consent. Avoid training on data without a legal basis; map data flows end-to-end.
  • Label synthetic media and implement watermarking where feasible. Set review processes for high-risk content.
  • Run bias testing, red-teaming, and safety evaluations before deployment and at set intervals.
  • Conduct Data Protection Impact Assessments for high-risk use cases. Keep audit trails and incident response playbooks.
  • Ensure human oversight and clear appeal channels for affected users and citizens.
  • Include AI risk clauses in procurement and vendor contracts; audit third-party models and APIs.
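The provenance and labeling items above can be made concrete with a simple audit-log entry per media asset. The sketch below is illustrative only: the function name and record fields are assumptions, not anything mandated by the ministry or the Roadmap, but they show the minimum worth capturing (content hash, source, legal basis, and a synthetic flag).

```python
import hashlib
import json
import tempfile
from datetime import datetime, timezone

def provenance_record(path: str, source: str, consent_basis: str, synthetic: bool) -> dict:
    """Build a minimal provenance/labeling entry for one media asset."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,                # content fingerprint for later verification
        "source": source,                # where the asset came from
        "consent_basis": consent_basis,  # legal basis for use (e.g. "owned", "licensed")
        "synthetic": synthetic,          # True marks AI-generated media for labeling
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Demo with a throwaway file; a real pipeline would append records to an audit log.
with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as tmp:
    tmp.write(b"demo-bytes")
record = provenance_record(tmp.name, source="design-team", consent_basis="owned", synthetic=True)
print(json.dumps(record, indent=2))
```

Storing these entries in an append-only log gives auditors a trail for every published asset and makes the "label synthetic media" rule checkable rather than aspirational.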

Roadmap focus and priority sectors

The National AI Roadmap will promote responsible AI adoption in health, education, finance, and transportation. It will set principles for use, including accountability, transparency, and respect for copyright.

Government priority sectors for AI development include:

  • Health
  • Digital talent education
  • Bureaucratic reform
  • Smart city development
  • Food security

Immediate actions to reduce risk

  • Audit current AI tools and content pipelines for deepfake exposure and impersonation risks.
  • Verify media sources in public communications; set takedown and reporting procedures.
  • Update policy: define acceptable use, data handling, copyright checks, and escalation paths.
  • Train staff to spot AI-driven fraud and misinformation; run simulations and tabletop exercises.
  • Coordinate with law enforcement on evidence handling and reporting under ITE and PDPL.
  • Budget for monitoring, content authenticity verification, and model governance tooling.
  • Track the National AI Roadmap and prepare to align internal policies once published.

Building internal capacity is essential: structured, role-based upskilling helps teams keep pace with the governance and technical requirements above.

