Nepal Rastra Bank unveils draft AI guidelines to tighten oversight of banks and protect customers

NRB's draft AI rules let banks use AI for credit, fraud, and support while enforcing transparency, privacy, and risk control. Feedback is open; expect stricter oversight.

Published on: Dec 07, 2025

Nepal Rastra Bank Drafts AI Guidelines to Tighten Oversight and Improve Service

Nepal Rastra Bank (NRB) has released a draft of new Artificial Intelligence Guidelines for public comment. The goal is simple: let banks and financial institutions use AI to improve efficiency and customer experience without sacrificing stability, fairness, or security.

The draft sets expectations for governance, transparency, privacy, and risk controls across AI use cases like credit scoring, fraud detection, customer support, risk management, and compliance.

Who this covers

  • Commercial banks and development banks
  • Finance companies and microfinance institutions
  • Nepal Infrastructure Bank
  • Payment System Operators (PSOs) and Payment Service Providers (PSPs)
  • Other institutions licensed by NRB (LIs)

Core goals of the guidance

  • Adopt AI in ways that improve efficiency, innovation, and customer experience while protecting financial stability and integrity.
  • Ensure AI is transparent, explainable, fair, and accountable; uphold customer rights and data privacy; avoid discriminatory or inaccurate outcomes.
  • Identify and manage AI-related risks: operational, ethical, systemic, model, and cyber.
  • Put clear governance in place so boards and management can oversee AI responsibly.
  • Expand safe, inclusive access to affordable financial services through AI.

AI use cases in scope

  • Credit decisioning and risk scoring
  • Fraud and AML anomaly detection
  • Customer service (chat, voice, email), including agent-assist
  • Enterprise risk management and early warning systems
  • Compliance monitoring and reporting

What this means for finance, insurance, and customer support teams

Expect stricter requirements for documentation, testing, and human oversight. If your team touches credit, fraud, claims, onboarding, or support, you'll need tighter controls and clearer customer communications.

  • Set AI governance: board accountability, risk owners, decision rights, and escalation paths.
  • Maintain an AI inventory with risk ratings, model purpose, data sources, and customer impact.
  • Validate models before and after deployment; monitor drift, data quality, and performance.
  • Test for fairness and accuracy; track metrics like disparate impact and false positives.
  • Provide explanations customers can understand; keep reason codes and adverse action notices ready.
  • Run data protection impact assessments; minimize data; enforce retention and deletion.
  • Secure model pipelines and APIs; apply least-privilege access; log everything meaningful.
  • Keep a human in the loop for high-impact decisions; define override rules.
  • Strengthen incident response for AI failures and cyber events, including notification steps.
  • Assess third-party and vendor models; require audits, data locality clarity, and exit plans.
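The fairness testing expectation above (tracking metrics like disparate impact) can be sketched in a few lines. This is an illustrative example, not an NRB-prescribed method: the group labels, sample data, and the 0.8 threshold (the common "four-fifths rule") are all assumptions you would replace with your institution's own protected attributes and policy thresholds.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Compute per-group approval rates and the disparate impact ratio.

    decisions: iterable of (group_label, approved_bool) pairs.
    Returns (rates_by_group, min_rate / max_rate).
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        if ok:
            approved[group] += 1
    rates = {g: approved[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Illustrative data: (region, approved) outcomes from a credit model.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 60 + [("B", False)] * 40)
rates, ratio = disparate_impact(sample)
# Four-fifths rule: flag for review if the worst-off group's approval
# rate falls below 80% of the best-off group's.
needs_review = ratio < 0.8  # here ratio = 0.6 / 0.8 = 0.75 -> review
```

A check like this belongs both in pre-deployment validation and in ongoing monitoring, so drift in the data does not silently erode fairness after launch.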

Timeline and how to respond

NRB has invited feedback on the draft. The notice references a submission deadline of December 29 (the year in the draft appears inconsistent), so stakeholders should verify details on the official site and submit comments accordingly.

Visit Nepal Rastra Bank for the latest circular and submission instructions.

Quick prep checklist

  • Appoint an executive AI sponsor and a cross-functional AI risk committee.
  • Catalog every AI system in production and pilots; classify by customer impact and risk.
  • Write or update model documentation: objectives, data lineage, features, limits, and controls.
  • Define fairness metrics and acceptance thresholds; test pre- and post-deployment.
  • Create customer-facing AI disclosures and appeals processes.
  • Draft an AI incident playbook with roles, timelines, and regulator touchpoints.
  • Review contracts with AI vendors for audit rights, data usage, and termination.
  • Train support, risk, and compliance teams on AI policies and escalation.
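The cataloging and classification steps in the checklist above can be sketched as a minimal AI register. The fields, tier names, and classification rule below are illustrative assumptions, not a schema from the draft guidelines; adapt them to your own risk taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str
    data_sources: list
    customer_impact: str   # assumed scale: "high" | "medium" | "low"
    status: str = "pilot"  # or "production"

def risk_tier(system: AISystem) -> str:
    # Illustrative rule: production systems with high customer impact
    # get the top tier and therefore the most oversight.
    if system.customer_impact == "high":
        return "tier-1" if system.status == "production" else "tier-2"
    return "tier-3"

inventory = [
    AISystem("credit-score-v2", "Credit decisioning",
             ["bureau", "transactions"], "high", "production"),
    AISystem("chat-assist", "Customer support triage",
             ["chat logs"], "low", "pilot"),
]
tiers = {s.name: risk_tier(s) for s in inventory}
```

Even a simple register like this makes the later steps (validation cadence, documentation depth, board reporting) easy to key off the assigned tier.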

Notes for PSOs and PSPs

  • Ensure real-time fraud models are explainable enough to act on and dispute quickly.
  • Balance security with customer experience; track false-positive rates and dispute SLAs.
  • Harden streaming data pipelines, model endpoints, and access tokens.
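The false-positive and dispute-SLA tracking suggested above can be computed from confirmed alert outcomes. A minimal sketch, assuming each alert record carries a confirmed fraud outcome and a time-to-resolution; the 48-hour SLA and the record layout are illustrative assumptions.

```python
from datetime import timedelta

def fraud_ops_metrics(alerts, sla=timedelta(hours=48)):
    """alerts: list of dicts with keys
       'fraud' (bool, confirmed outcome after investigation) and
       'resolution_time' (timedelta from alert to dispute resolution).
    Returns (false_positive_rate, share_resolved_within_sla).
    """
    flagged = len(alerts)
    false_pos = sum(1 for a in alerts if not a["fraud"])
    within = sum(1 for a in alerts if a["resolution_time"] <= sla)
    return false_pos / flagged, within / flagged

# Illustrative month of confirmed alert outcomes.
alerts = [
    {"fraud": True,  "resolution_time": timedelta(hours=6)},
    {"fraud": False, "resolution_time": timedelta(hours=30)},
    {"fraud": False, "resolution_time": timedelta(hours=72)},
    {"fraud": True,  "resolution_time": timedelta(hours=12)},
]
fpr, sla_rate = fraud_ops_metrics(alerts)
# fpr = 2/4 = 0.5; sla_rate = 3/4 = 0.75
```

Trending these two numbers together surfaces the security-versus-experience trade-off the guidance asks PSOs and PSPs to manage.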

Notes for microfinance and development banks

  • Be careful with thin-file borrowers and alternative data; document what's used and why.
  • Use human review for edge cases; monitor for bias across regions and customer groups.

Why this matters for customer support

Support teams will be the first to explain AI decisions to customers. Prepare clear scripts, build an appeals path that leads to a human decision, and log outcomes so product and risk teams can improve the models.
