OJK updates AI code of ethics for fintech: stronger rules on fairness, data, and resilience
Indonesia's Financial Services Authority (OJK) has updated its AI code of ethics to strengthen risk controls in financial services. The update was announced alongside the OECD Asia Forum on Digital Finance 2025 in Bali and was developed with OECD backing through a formal review and input process.
The refresh zeroes in on new AI developments, especially generative AI, and tightens expectations around consumer protection, model and data reliability, financial inclusion, data protection, and cyber resilience. In plain terms: faster innovation is welcome, but governance has to keep pace.
What changed
- Explicit coverage of generative AI, with guidance to adjust controls as the tech advances.
- A new core requirement: fairness. This addition sits alongside the existing principles.
- Reaffirmed principles: based on Pancasila, beneficial, fair and just, accountable, transparent and explainable, and resilience and security.
- OECD supported the update through a review and input process.
Why this matters to finance leaders
AI is improving process efficiency and transaction speed across banking, payments, and insurance. Generative AI can also accelerate fraud detection, improve service quality, and personalize products.
But it introduces fresh risks: hallucinations, leakage of personal or sensitive data, and algorithmic bias that can distort underwriting or limit access. The new guidance pushes firms to treat these as first-order risks, not edge cases.
Action checklist for banks, fintechs, and insurers
- Governance and accountability: Assign a single owner for each AI use case, with board-level oversight. Maintain an inventory of models, purposes, and risk ratings.
- Model and data reliability: Track data lineage, consent, and quality. Monitor drift, stress-test against edge cases, and keep humans in the loop for high-impact decisions.
- Bias and fairness: Run pre- and post-deployment fairness tests (e.g., disparate impact). Set thresholds, escalation paths, and override rules when bias is detected.
- Consumer protection and transparency: Use plain-language disclosures, explain key factors in decisions, and provide appeal channels. For adverse decisions, document the rationale.
- Privacy and security: Apply data minimization, masking, and PII redaction. Use synthetic or segmented data for training. Tighten third-party and API controls.
- GenAI-specific controls: Test for hallucinations, off-topic responses, and prompt injection. Implement approved prompt libraries, content filters, and retrieval safeguards.
- Cyber resilience: Red-team AI endpoints, log all model interactions, and set rate limits. Validate vendor claims with evidence, not slides.
- Incident response: Define playbooks for model failure, data leakage, and bias incidents. Set timelines and channels for regulator and customer communications.
- Documentation: Maintain model cards, decision logs, and data dictionaries. Keep audit trails current and accessible.
- Inclusion: Track access and approval outcomes across segments. Offer alternatives for thin-file or underserved customers.
- Training and culture: Train product, risk, compliance, and frontline teams on the new rules and practical dos and don'ts.
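The bias and fairness item above can be made concrete with a disparate impact ratio, one common pre- and post-deployment fairness test. Here is a minimal sketch in Python; the group data is illustrative, and the 0.8 cutoff follows the widely cited four-fifths rule rather than any OJK-mandated threshold:

```python
# Sketch of a disparate impact check: the approval rate of a protected
# segment divided by that of a reference segment. Ratios below ~0.8
# (the "four-fifths rule") are a common trigger for escalation.

def approval_rate(decisions):
    """Share of positive (approve) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of approval rates; values below the threshold suggest possible bias."""
    return approval_rate(protected) / approval_rate(reference)

# Illustrative outcomes (1 = approved, 0 = declined) for two customer segments.
reference_group = [1, 0, 1, 1, 0, 1, 0, 1]   # approval rate 0.625
protected_group = [1, 0, 0, 1, 0, 0, 0, 1]   # approval rate 0.375

ratio = disparate_impact(protected_group, reference_group)
if ratio < 0.8:
    print(f"DI ratio {ratio:.2f} is below the 0.8 threshold; escalate for review")
```

In practice the same check would run per model and per decision type, with the thresholds, escalation paths, and override rules from the checklist documented alongside it.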
What to do next (30-90 days)
- Run a gap assessment against the updated OJK principles; prioritize high-impact use cases (credit, fraud, claims, KYC).
- Stand up fairness and hallucination testing in your model validation workflow.
- Update vendor contracts to require explainability, security attestations, and bias reporting.
- Pilot transparency UX: clear disclosures, concise explanations, and appeal options.
- Brief the board and set measurable targets for model reliability, fairness, and incident readiness.
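Several items above, from the data minimization and PII redaction controls in the checklist to piloting updated workflows, start with scrubbing identifiers before text reaches a model or a vendor. A minimal regex-based sketch follows; the patterns (a 16-digit NIK, email addresses, +62-style phone numbers) are simplified assumptions, and a production redactor would need a much fuller PII taxonomy:

```python
# Sketch of pre-processing text before it is sent to a model or third party:
# regex-based masking of a few common Indonesian-market identifiers.
import re

PATTERNS = {
    "NIK":   re.compile(r"\b\d{16}\b"),            # national ID number (16 digits)
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?62[\d\- ]{8,13}\d"),  # +62-style phone numbers
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Customer 3174051234567890, email a.putri@example.com, phone +62 812 3456 7890"
print(redact(sample))
# → Customer [NIK], email [EMAIL], phone [PHONE]
```

Logging what was redacted (labels and counts, never the raw values) also feeds the documentation and audit-trail items in the checklist.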
For reference, see the official OJK announcement and the OECD's work on digital finance.