AI governance is now a core strategy for financial institutions
AI is no longer a side project for financial institutions. It sits in the middle of product design, risk, and operations, so the guardrails need to be as strong as the ambition.
The Monetary Authority of Singapore (MAS), the country's central bank, has proposed guidelines on AI risk management that move the industry from "why" to "how." The message is simple: build clarity without handcuffs, scale with accountability, and stay ready for new risks as models and use cases advance.
What MAS expects: clarity with room to innovate
The proposed guidance builds on existing supervisory expectations (oversight, risk management, policies and frameworks, lifecycle controls, and capability) while addressing AI-specific risks. It's principles-based to avoid over-prescription, yet actionable enough to implement proportionately across institutions of different sizes and risk profiles.
Two balances stand out. First, clarity and flexibility: make the rules usable without constraining legitimate innovation. Second, address current risks while preparing for emerging ones, such as AI agents gaining more autonomy, exposure to malicious prompts, or unauthorised data access. The guidance is intended to apply broadly across AI technologies and use cases, with industry feedback encouraged.
Why this matters to CFOs, CROs, and business heads
DBS shared that it is on track to deliver nearly $1b in value from AI, measured through rigorous A/B testing. That means AI isn't a novelty; it's a line-item contributor.
The catch: speed requires brakes. As DBS put it, governance is what lets you go faster with confidence. Without it, the risk cost eventually cancels out the gains.
What good looks like in practice
- Model inventory and risk tiering: Maintain a registry of AI uses across business lines, classified by customer impact, financial exposure, and data sensitivity (a minimal tiering sketch follows this list).
- Clear accountability: Assign an executive owner, establish an AI governance forum, and set up independent challenge for high-impact models.
- Lifecycle controls: Standardize data lineage, access controls, testing, deployment, and decommissioning. Apply strong vendor oversight and contractual safeguards.
- Human-in-the-loop for high-stakes use: Require human review and documented override paths where customer outcomes or financial stability are affected.
- Measurement and monitoring: Use A/B testing for value attribution. Track drift, stability, bias, and hallucinations with thresholds, alerts, and kill switches.
- Data and model security: Keep a "walled garden" so proprietary data stays inside. Enforce least privilege, comprehensive logging, and prompt/agent red-teaming.
- Shadow AI controls: Publish an approved tools list, block unvetted apps, and provide safe alternatives. Train staff on what's allowed and how to use it.
- MRM integration: Fold AI into your existing model risk management (MRM) framework: validation, performance attestation, and stress tests for non-deterministic failure modes.
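To make the inventory-and-tiering bullet concrete, here is a minimal sketch of a registry entry and a tiering rule in Python. The field names, tier labels, scoring scales, and thresholds are illustrative assumptions for your risk function to calibrate, not MAS-prescribed values.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(str, Enum):
    HIGH = "high"      # independent challenge + human-in-the-loop required
    MEDIUM = "medium"  # periodic validation
    LOW = "low"        # standard monitoring

@dataclass
class AIUseCase:
    name: str
    business_line: str
    executive_owner: str
    customer_impact: int      # 1 (none) .. 3 (direct customer outcomes)
    financial_exposure: int   # 1 (negligible) .. 3 (material P&L impact)
    data_sensitivity: int     # 1 (public) .. 3 (personal/confidential)

    def tier(self) -> Tier:
        # Simple rule: a top score on any dimension forces the high tier;
        # otherwise tier on the combined score. Thresholds are assumptions.
        if 3 in (self.customer_impact, self.financial_exposure, self.data_sensitivity):
            return Tier.HIGH
        total = self.customer_impact + self.financial_exposure + self.data_sensitivity
        return Tier.MEDIUM if total >= 5 else Tier.LOW

registry = [
    AIUseCase("credit-limit-advisor", "retail", "CRO office", 3, 2, 3),
    AIUseCase("internal-doc-search", "ops", "COO office", 1, 1, 2),
]
for uc in registry:
    print(f"{uc.name}: {uc.tier().value}")
```

Keeping the tiering rule in code rather than in a policy document means every business line classifies the same way, and a change to the rule is reviewable like any other change.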
Embedded AI makes governance harder; here's how to keep it manageable
As AI gets baked into core systems and vendor platforms, oversight can sprawl. Anchor everything to a common knowledge base, common risk language, and shared methodologies across teams.
Focus on reducing ambiguity: define acceptable error rates, escalation criteria, and documentation standards. Build institution-specific processes on that foundation as technology shifts.
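One way to reduce that ambiguity is to encode the standards as data rather than prose, so every team escalates against the same numbers. A minimal sketch follows; the metric names and limits are invented placeholders, not regulatory thresholds.

```python
# Shared standards, one source of truth across teams.
# All numbers are illustrative assumptions.
STANDARDS = {
    "max_error_rate": 0.02,   # fraction of reviewed outputs judged wrong
    "max_drift_psi": 0.25,    # population stability index on key features
    "escalate_to": "ai-governance-forum",
    "docs_required": ["model card", "validation report", "owner sign-off"],
}

def needs_escalation(error_rate: float, drift_psi: float) -> bool:
    """Escalate when any observed metric breaches the shared standard."""
    return (error_rate > STANDARDS["max_error_rate"]
            or drift_psi > STANDARDS["max_drift_psi"])

print(needs_escalation(error_rate=0.035, drift_psi=0.1))  # True -> escalate
```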
Prepare for agents, prompts, and data drift
Expect more autonomous agents interacting with internal systems. Limit their scope with sandboxing, allow/deny lists, token budgets, rate limits, and strong secrets handling.
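As a sketch of what "limit their scope" can mean in practice, the wrapper below enforces an allowlist, a token budget, and a simple rate limit before any tool call runs. The tool names and limits are assumptions for illustration.

```python
import time

class AgentGuardrail:
    """Illustrative guardrail: allowlist + token budget + rate limit."""

    def __init__(self, allowed_tools, token_budget, max_calls_per_minute):
        self.allowed_tools = set(allowed_tools)
        self.tokens_left = token_budget
        self.max_calls = max_calls_per_minute
        self.call_times = []

    def authorize(self, tool: str, estimated_tokens: int) -> bool:
        now = time.monotonic()
        # Keep only calls from the last 60 seconds for the rate limit.
        self.call_times = [t for t in self.call_times if now - t < 60]
        if tool not in self.allowed_tools:
            return False  # deny by default: tool not on the allowlist
        if estimated_tokens > self.tokens_left:
            return False  # token budget exhausted
        if len(self.call_times) >= self.max_calls:
            return False  # rate limit hit
        self.tokens_left -= estimated_tokens
        self.call_times.append(now)
        return True

guard = AgentGuardrail(["search_kb", "draft_email"], token_budget=50_000,
                       max_calls_per_minute=10)
print(guard.authorize("search_kb", 1_200))   # True
print(guard.authorize("wire_transfer", 50))  # False: not allowlisted
```

Deny-by-default is the point: an agent can only do what the guardrail explicitly permits, and every refusal is a loggable event.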
Models are not "set and forget." Watch for staleness and performance degradation as production data drifts away from what the model was trained on. Continuous monitoring is what decides whether a model remains fit for purpose.
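Continuous monitoring usually reduces to a handful of statistics computed on a schedule. As one common choice, the sketch below computes the population stability index (PSI) between a training-time baseline and recent production values of a feature; the 0.25 alert threshold is a widely used rule of thumb, not a MAS requirement, and the data here is synthetic.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline and a recent sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def share(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]      # training-time feature values
recent = [0.1 * i + 2.0 for i in range(100)]  # shifted production values
score = psi(baseline, recent)
print(f"PSI={score:.2f}", "ALERT: review or retrain" if score > 0.25 else "OK")
```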
Quick checkpoints for 2025 plans
- Run a gap assessment against MAS's draft guidance and your current policies.
- Update risk appetite statements to include AI-specific metrics and thresholds.
- Stand up a model registry and dashboards for KPIs, drift, bias, incidents, and approvals.
- Standardize A/B testing methods to quantify business impact across units (a minimal sketch follows this list).
- Set vendor attestation requirements for training data, privacy, and data residency.
- Write an AI incident playbook: rollback steps, communications, audit trail, and customer remediation.
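For the A/B testing checkpoint, one standard approach is a two-sample comparison of a business metric between control and AI-assisted groups. The sketch below uses Welch's t-test from SciPy on synthetic revenue-per-customer numbers; the figures are purely illustrative.

```python
import numpy as np
from scipy import stats

# Synthetic example: monthly revenue per customer, control vs. AI-assisted.
rng = np.random.default_rng(0)
control = rng.normal(loc=100.0, scale=20.0, size=5_000)
treated = rng.normal(loc=103.0, scale=20.0, size=5_000)

lift = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test

print(f"estimated lift: {lift:.2f} per customer, p-value: {p_value:.4f}")
# Attribute value only when the lift is both material and statistically
# significant; otherwise treat it as noise rather than a reportable gain.
```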
The takeaway for leadership
Governance is how you scale safely. It protects customers, strengthens trust, and frees teams to ship faster with fewer surprises.
Invest in oversight, monitoring, and staff capability now. The firms that treat governance as an enabler, not a checkbox, will win the compounding gains from AI.