MAS Proposes AI Risk Management Guidelines for Financial Institutions
The Monetary Authority of Singapore (MAS) has released a consultation paper proposing Guidelines on AI Risk Management. The aim: give financial institutions clear, practical expectations for the responsible use of AI across their businesses.
The Guidelines apply to all FIs and are proportionate by design. They cover traditional models, generative AI, and newer developments such as AI agents, with expectations that scale with an institution's size, risk profile, and AI footprint.
What MAS Expects
- Governance and oversight: Boards and senior management should set the tone, own the risk, and establish frameworks, structures, policies, and processes for AI use.
- Enterprise view of AI: Identify AI use across the organisation, maintain an accurate inventory, and assess risks by impact, complexity, and reliance.
- Lifecycle controls: Implement controls for data management, fairness, transparency and explainability, human oversight, third-party risks, evaluation and testing, monitoring, and change management.
- Capabilities and capacity: Ensure sufficient skills, resources, tools, and escalation paths to manage AI across functions and geographies.
- Proportionality: Calibrate controls to the materiality of AI-related risks, not a one-size-fits-all checklist.
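The expectations above for an enterprise view of AI can be made concrete as a structured inventory. Below is a minimal sketch of what an inventory record might look like; the class, field names, and 1-to-3 scoring scale are illustrative assumptions for this sketch, not fields prescribed by MAS.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory record; the fields are illustrative assumptions,
# not a schema prescribed by the MAS Guidelines.
@dataclass
class AIUseCase:
    use_case_id: str
    description: str
    business_unit: str
    model_type: str     # e.g. "traditional", "generative", "agent"
    third_party: bool   # vendor-supplied or embedded model feature
    impact: int         # 1 (low) to 3 (high): customer/market impact
    complexity: int     # 1 to 3: e.g. linear model vs. LLM agent
    reliance: int       # 1 to 3: degree of automation without human review
    owner: str
    last_reviewed: date

# A simple in-memory registry; in practice this would live in a governed system.
registry: list[AIUseCase] = []

def register(uc: AIUseCase) -> None:
    """Add a use case to the enterprise inventory, rejecting duplicate IDs."""
    if any(existing.use_case_id == uc.use_case_id for existing in registry):
        raise ValueError(f"duplicate use case id: {uc.use_case_id}")
    registry.append(uc)
```

Capturing impact, complexity, and reliance per use case is what later lets controls scale with materiality rather than being applied uniformly.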
Why This Matters for Finance and Management
AI now touches core functions: underwriting, trading, fraud detection, client engagement, operations, reporting, and business continuity management (BCM). This proposal turns broad principles into actionable expectations that regulators can supervise against.
It aligns with existing good practice, including MAS' FEAT principles, and with international standards such as the NIST AI Risk Management Framework.
Immediate Actions for the Next 90 Days
- Appoint an accountable executive and refresh your AI policy to reflect proportionality and materiality.
- Map all AI use cases (including pilots, vendor tools, and embedded model features) and build or update your AI inventory.
- Define a risk-tiering method based on impact, complexity, and reliance; tie control baselines to each tier.
- Set lifecycle controls: data quality and lineage, fairness checks, explainability criteria, human-in-the-loop thresholds, change management, and incident handling.
- Review third-party exposure: contracts, model access, data rights, evaluation obligations, and ongoing monitoring.
- Stand up evaluation and testing routines for pre-deployment and continuous monitoring; document metrics and thresholds.
- Brief the board on governance, material risks, and residual exposures; align on resourcing and timelines.
- Prepare input for MAS with concrete feedback on proportionality, testing expectations, and third-party responsibilities.
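The risk-tiering step above can be sketched as a simple scoring rule. The three dimensions (impact, complexity, reliance) come from the proposal; the scoring bands, tier names, and control baselines below are assumptions made for illustration, not MAS requirements.

```python
# Illustrative control baselines per tier; the tier names and control lists
# are assumptions for this sketch, not MAS-mandated sets.
CONTROL_BASELINES = {
    "high": ["fairness testing", "human-in-the-loop", "red-teaming",
             "continuous monitoring", "board reporting"],
    "medium": ["fairness testing", "sampling review", "drift monitoring"],
    "low": ["inventory entry", "periodic review"],
}

def risk_tier(impact: int, complexity: int, reliance: int) -> str:
    """Map 1-3 scores on each dimension to a tier (assumed bands)."""
    for score in (impact, complexity, reliance):
        if not 1 <= score <= 3:
            raise ValueError("scores must be between 1 and 3")
    total = impact + complexity + reliance
    if impact == 3 or total >= 7:   # high impact alone forces the top tier
        return "high"
    if total >= 5:
        return "medium"
    return "low"

def baseline_controls(impact: int, complexity: int, reliance: int) -> list[str]:
    """Return the control baseline tied to the computed tier."""
    return CONTROL_BASELINES[risk_tier(impact, complexity, reliance)]
```

Treating high impact as an automatic trigger for the top tier, regardless of the other scores, is one way to keep proportionality from diluting controls on consequential decisions.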
Lifecycle Control Checklist
- Data: Sourcing, permissions, lineage, quality, bias mitigations, and retention aligned to use and regulation.
- Fairness: Define protected attributes, test for disparate impact, and document mitigations and trade-offs.
- Transparency and explainability: Set requirements by use case (e.g., credit vs. marketing), and ensure user-appropriate explanations.
- Human oversight: Escalation paths, override rights, and sampling reviews, especially for high-impact decisions.
- Third parties: Due diligence, model cards or equivalent documentation, service levels, audit rights, and change notifications.
- Evaluation and testing: Pre-deployment tests, adversarial and red-team exercises, drift detection, and performance thresholds.
- Monitoring: Outcome tracking, bias and error rates, alerting, incident logs, and remediation timelines.
- Change management: Versioning, approvals, rollback plans, and control re-validation after model or data changes.
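The drift detection called for in the evaluation and monitoring items above is often implemented with a distribution-shift statistic. Below is a minimal sketch using the population stability index (PSI); the alert threshold and the rule-of-thumb bands in the docstring are common industry conventions, not thresholds set by MAS.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """Compute PSI between two binned distributions (proportions summing to 1).

    Common rule of thumb (an assumption here, not a MAS threshold):
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    eps = 1e-6  # guard against empty bins before taking logs
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi

def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.25) -> bool:
    """Flag a use case for review when input drift exceeds the threshold."""
    return population_stability_index(expected, actual) > threshold
```

In practice the same comparison would run on a schedule per model input and output, with alerts feeding the incident logs and remediation timelines noted in the monitoring item.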
Scope and Proportionality
The Guidelines cover a broad set of AI applications and technologies, including generative AI and AI agents. MAS expects firms to right-size controls based on their activities, AI usage, and risk profile: intensive where the stakes are high, lighter where risks are limited.
What MAS Has Seen
The proposal builds on MAS' 2024 thematic review of major banks' AI use and subsequent industry discussions. As stated: "The proposed Guidelines on AI Risk Management provide financial institutions with clear supervisory expectations to support them in leveraging AI in their operations. These proportionate, risk-based guidelines enable responsible innovation by financial institutions that implement the relevant safeguards to address key AI-related risks."
Preparing Your Feedback to MAS
Areas where concrete industry input is likely to be most useful include:
- Thresholds for materiality and how to scale controls across tiers.
- Expectations for gen AI and AI agents, including data leakage and prompt injection risks.
- Explainability standards by use case, and acceptable documentation for third-party models.
- Human-in-the-loop requirements for high-impact decisions and incident reporting triggers.
- Testing frequency, performance/quality metrics, and minimum monitoring baselines.
- Accountability split between FIs and vendors, plus cross-border data and audit rights.
Timeline and Next Steps
MAS invites comments on the proposals by 31 January 2026. Start your internal gap assessment now, line up board engagement, and consolidate feedback from risk, compliance, technology, data, and business teams.
Skills and Enablement
If you are building competency across risk, data, and product teams, you can explore role-based learning paths here: AI courses by job.