Indonesia Delays AI Regulation to Early 2026: What Government Agencies Should Do Now
Indonesia's first comprehensive AI regulation is now slated for early 2026. The upcoming presidential regulation will set direction on ethics, safety, and security, while detailed enforcement will sit with sector-specific rules drafted by ministries and agencies.
Until those rules are issued, ministries and agencies are urged to hold off on deploying AI internally. Penalties for misuse will continue to rely on existing laws, notably the Electronic Information and Transactions (ITE) Law and the Personal Data Protection (PDP) Law.
What the 2026 AI Regulation Will Emphasize
- Ethics, safety, and security as the foundation for all AI use in government and industry.
- Sectoral guidelines: each ministry/agency will be responsible for its own implementing rules after the presidential regulation is published.
- No new penalties in the presidential regulation; enforcement will refer to existing laws (e.g., ITE and PDP).
- Heightened need for PDP Law implementation, including the supervisory body and derivative regulations, to make oversight workable.
Why This Matters for Government Teams
AI pilots that move ahead without internal rules risk compliance gaps, public backlash, and project stalls. Teams that prepare their governance now will be able to launch faster once the regulation lands.
Action Plan for the Next 90-180 Days
1) Assign ownership
Appoint an accountable executive and an AI working group (policy, legal, IT/security, data, procurement, program owners). Define decision rights and reporting lines.
2) Draft a clear internal AI policy
Set approval requirements for AI pilots, define prohibited uses, require a register of AI systems, and mandate human-in-the-loop review for high-impact decisions. Include record-keeping and audit requirements.
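As a concrete starting point, the register of AI systems can be a structured record per system. Below is a minimal sketch in Python; the schema, field names, and the example system are illustrative assumptions, not requirements from any Indonesian regulation.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One entry in an internal register of AI systems (illustrative schema)."""
    system_name: str
    owner: str                      # accountable executive or unit
    purpose: str                    # what the system is used for
    impact_tier: str                # "low" | "medium" | "high"
    human_in_the_loop: bool         # required for high-impact decisions
    approved: bool = False          # approval status for pilots
    datasets: List[str] = field(default_factory=list)
    approval_date: Optional[date] = None

# Example: registering a hypothetical pilot before approval
register = [
    AISystemRecord(
        system_name="meal-program-anomaly-detector",
        owner="Program Monitoring Unit",
        purpose="Flag irregular delivery reports for human review",
        impact_tier="high",
        human_in_the_loop=True,
        datasets=["delivery_logs", "vendor_invoices"],
    )
]
```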
3) Data and privacy readiness
Map datasets and data flows for each AI use case. Complete risk assessments and lawful-basis checks; obtain consent or provide notice where required; and set retention limits, de-identification measures, and cross-border data provisions aligned with the PDP Law.
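To make the mapping auditable, each use case can carry a record-of-processing entry per dataset. The sketch below is a hypothetical structure with illustrative fields and values; align the real schema with your PDP Law compliance documentation.

```python
# A minimal, illustrative record-of-processing entry for one AI use case.
data_flow_map = {
    "use_case": "crop-yield-forecasting",
    "datasets": [
        {
            "name": "field_sensor_readings",
            "contains_personal_data": False,
            "lawful_basis": None,           # not required: no personal data
            "retention_days": 730,
            "cross_border_transfer": False,
            "de_identified": True,
        },
        {
            "name": "farmer_registration",
            "contains_personal_data": True,
            "lawful_basis": "public_task",  # verify against PDP Law bases
            "retention_days": 365,
            "cross_border_transfer": False,
            "de_identified": False,
        },
    ],
}

# Simple completeness check before a pilot is approved:
for ds in data_flow_map["datasets"]:
    if ds["contains_personal_data"]:
        assert ds["lawful_basis"], f"{ds['name']}: lawful basis missing"
        assert ds["retention_days"], f"{ds['name']}: retention limit missing"
```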
4) Security and safety controls
Implement access controls, encryption, logging, model/input/output monitoring, abuse detection, and incident response. Require pre-deployment red-teaming for high-risk use cases and content provenance or labeling for AI-generated outputs that reach the public.
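One practical pattern is to wrap every model call in an audit layer that logs inputs and outputs and applies a basic input policy. The sketch below assumes a generic `model_fn` callable and a placeholder blocklist; both are illustrative, not a specific product's API.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

BLOCKED_TERMS = {"nik", "passport"}  # placeholder input-policy patterns only

def audited_call(model_fn, user_id: str, prompt: str) -> str:
    """Wrap any model call with input/output logging and a basic abuse check."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        log.warning("blocked request from %s", user_id)
        raise ValueError("Request blocked by input policy")

    output = model_fn(prompt)  # model_fn is whatever client you deploy

    # Log hashes of the content rather than raw text to limit data exposure.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }))
    return output

# Usage with a stand-in model:
print(audited_call(lambda p: p.upper(), "analyst-01", "summarize this report"))
```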
5) Model and vendor risk management
Classify systems by impact (low/medium/high). For vendors, require security attestations, data-use limits, confidentiality, IP protections, model cards or documentation, evaluation results, service levels, and exit plans. For open-source models, document versioning and patching.
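Impact classification works best as a small, explicit decision rule rather than ad hoc judgment. The function below is a sketch under assumed criteria (benefits impact, automation level, citizen exposure); real criteria should come from your internal policy and future sectoral rules.

```python
def classify_impact(affects_benefits: bool,
                    fully_automated: bool,
                    citizen_facing: bool) -> str:
    """Assign an illustrative impact tier from three yes/no questions."""
    if affects_benefits:
        return "high"      # decisions touching benefits or eligibility
    if fully_automated or citizen_facing:
        return "medium"    # publicly visible or unreviewed outputs
    return "low"           # internal, advisory, or back-office use

# Example: an internal drafting assistant with human review
print(classify_impact(affects_benefits=False,
                      fully_automated=False,
                      citizen_facing=False))  # -> "low"
```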
6) Evaluation and fairness
Define accuracy, bias, safety, and reliability metrics per use case. Test with representative data (including edge cases), publish evaluation summaries internally, and schedule periodic re-testing.
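Even simple, reproducible metrics beat none. The sketch below computes accuracy and one basic fairness signal, the gap in positive-prediction rates across groups, on toy data; in practice you would track several metrics on representative datasets.

```python
from collections import defaultdict

def accuracy(preds, labels):
    """Fraction of predictions matching the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def parity_gap(preds, groups):
    """Max difference in positive-prediction rate across groups
    (one simple fairness signal; use several metrics in practice)."""
    rates = defaultdict(list)
    for p, g in zip(preds, groups):
        rates[g].append(p)
    group_rates = [sum(v) / len(v) for v in rates.values()]
    return max(group_rates) - min(group_rates)

# Toy example: predictions (1 = flagged), true labels, and a region attribute
preds  = [1, 0, 1, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 0]
region = ["A", "A", "A", "B", "B", "B"]

print(f"accuracy:   {accuracy(preds, labels):.2f}")    # 0.83
print(f"parity gap: {parity_gap(preds, region):.2f}")  # 0.33
```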
7) Procurement language
Add clauses on data residency (where applicable), human oversight, transparency, audit rights, incident reporting timelines, and compliance with PDP/ITE and future sectoral rules. Prefer modular contracts that can be updated when the presidential regulation is issued.
8) Public transparency
Prepare simple notices for citizen-facing services that use AI. Provide an appeal path to a human, explain data sources, and publish contact points for complaints.
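A citizen-facing notice can be short and plain. The template below is an illustrative sketch; the wording, purpose text, and contact details are placeholders to adapt, not official language.

```python
NOTICE_TEMPLATE = """\
This service uses an automated system (AI) to {purpose}.
A human officer reviews every decision before it takes effect.
Data used: {data_sources}.
To appeal a decision or ask a question, contact: {contact}.
"""

print(NOTICE_TEMPLATE.format(
    purpose="help prioritize applications for review",
    data_sources="your application form and supporting documents",
    contact="layanan@example.go.id",  # placeholder contact point
))
```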
9) Budget and infrastructure
Estimate compute, storage, and networking needs for pilot use cases. Prepare proposals that could align with the planned sovereign AI fund and any future fiscal incentives.
10) Capability building
Roll out short, role-specific training for program owners, data stewards, legal/compliance, and leadership.
Priority Use Cases Cited for Quick Wins
- Monitoring the free nutritious meal program (verification, fraud detection, and logistics oversight).
- Crop yield forecasting to support food self-sufficiency.
- Financial reporting analytics for the Red-White cooperative initiative.
For each, define the problem, outcomes, datasets, controls, and a small pilot with clear success thresholds. Keep humans in the loop for decisions that affect benefits or eligibility.
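A pilot definition can be captured as a small structured document with explicit go/no-go thresholds. The example below uses the meal-program use case; all metric names and threshold values are illustrative assumptions to adapt per pilot.

```python
# An illustrative pilot definition; thresholds are placeholders to adapt.
pilot = {
    "use_case": "free-nutritious-meal-monitoring",
    "problem": "Detect irregular delivery reports for human investigation",
    "datasets": ["delivery_logs", "vendor_invoices"],
    "controls": ["human review of every flag", "audit logging", "access control"],
    "success_thresholds": {
        "precision_of_flags": 0.80,   # share of flags confirmed by reviewers
        "review_turnaround_days": 3,
        "false_flag_rate": 0.05,
    },
    "duration_weeks": 12,
    "go_no_go": "Expand only if all thresholds hold for 4 consecutive weeks",
}

def meets_thresholds(observed: dict, pilot: dict) -> bool:
    """Check observed pilot metrics against the defined thresholds."""
    t = pilot["success_thresholds"]
    return (observed["precision_of_flags"] >= t["precision_of_flags"]
            and observed["review_turnaround_days"] <= t["review_turnaround_days"]
            and observed["false_flag_rate"] <= t["false_flag_rate"])

print(meets_thresholds(
    {"precision_of_flags": 0.86, "review_turnaround_days": 2,
     "false_flag_rate": 0.03},
    pilot,
))  # -> True
```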
Legal Levers You Can Rely on Today
Enforcement for AI misuse currently ties back to existing laws. Review and align with the PDP Law and the ITE Law for data handling, security, and electronic information practices.
Also leverage existing procurement, records management, and security policies while you wait for sectoral AI rules.
Investment, Growth, and Talent
Analysis suggests AI could add up to US$140 billion to GDP by 2030, with an estimated US$3.2 billion needed for compute and about US$968 million to train 400,000 digital professionals. Agencies can get ahead by mapping workforce skill gaps and prioritizing training for roles closest to near-term use cases.
Aim for practical, project-based learning anchored to your pilot portfolio. Pair training with playbooks, checklists, and example templates to speed adoption without creating risk.
What to Watch Next
- Release of the presidential regulation in early 2026.
- Derivative rules and guidance from each ministry/agency following the regulation.
- PDP Law implementing regulations and the supervisory body's stand-up, which will affect oversight timelines.
- Design and timing of the proposed sovereign AI fund (planned 2027-2029) and any new fiscal incentives.
Bottom Line for Government Teams
Wait for formal approval before full-scale deployment, but don't wait to prepare. Set governance, lock in privacy and security basics, and run controlled pilots with strong oversight. The teams that do the groundwork now will move first when the regulation lands.