MAS Consultation Paper: AI Risk Management Guidelines for Financial Institutions
15 December 2025
The Monetary Authority of Singapore (MAS) has issued a Consultation Paper proposing Guidelines on Artificial Intelligence Risk Management (AIRM Guidelines) for the financial sector. The proposals build on the earlier FEAT principles (Fairness, Ethics, Accountability, and Transparency) and translate them into clearer, more actionable expectations for firms.
If you run or oversee AI projects in a bank, insurer, or asset manager, the message is simple: treat AI like any other material risk. Set clear ownership, test before you deploy, monitor after you deploy, and keep people in the loop where outcomes matter.
Key components at a glance
1) Governance and AI oversight
- Board and senior management own AI risk outcomes and set the tone for responsible use.
- Define roles for model owners, risk, compliance, audit, and technology teams.
- Embed AI risks into existing enterprise risk frameworks, policies, and reporting.
2) Key AI risk assessment and management
- Identify and label AI use cases consistently across the organisation.
- Maintain a current inventory of AI systems, models, datasets, and third-party components.
- Apply a structured method to assess risk materiality, and scale controls to the risk (a minimal sketch follows this list).
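To make the inventory and materiality assessment concrete, here is a minimal sketch in Python. The record fields, scoring dimensions, weights, and thresholds are illustrative assumptions, not values prescribed by MAS; each institution will need to define its own.

from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    use_case: str                   # consistent label across the organisation
    owner: str                      # accountable model owner
    models: list[str] = field(default_factory=list)
    datasets: list[str] = field(default_factory=list)
    third_party: list[str] = field(default_factory=list)  # vendor models, APIs

def materiality(customer_impact: int, autonomy: int, reach: int) -> str:
    """Rate each dimension 1 (low) to 3 (high); weights are assumptions."""
    score = 0.5 * customer_impact + 0.3 * autonomy + 0.2 * reach
    if score >= 2.5:
        return "high"    # full life cycle controls, independent review
    if score >= 1.5:
        return "medium"  # standard controls, periodic monitoring
    return "low"         # baseline controls

entry = AIInventoryEntry("credit-scoring", "Retail Risk", models=["xgb-v3"])
print(materiality(customer_impact=3, autonomy=2, reach=2))  # prints "high"

The point is not the exact weights but that the scoring logic is written down, versioned, and applied consistently across every use case.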
3) AI life cycle controls
- Data management: Use data that is fit for purpose, representative, and high quality, with clear ownership and lineage across the AI life cycle.
- Transparency and explainability: Increase explanation depth when AI decisions affect customers or risk outcomes in a meaningful way.
- Fairness and bias: Define fairness for each use case, test for harmful bias, and mitigate issues promptly (a simple check is sketched after this list).
- Human oversight: Keep qualified humans in control, with review points and escalation paths that actually work in practice.
- Third-party AI: Assess vendor models and data with the same rigour as in-house systems; match oversight to risk.
- Algorithm and feature selection: Choose methods and features that align with objectives and risk limits; document choices and trade-offs.
- Evaluation and testing: Test performance, stability, drift, and unintended impacts in proportion to risk materiality.
- Technology and cybersecurity: Run AI on secure, resilient infrastructure with strong access controls, monitoring, and incident response.
- Documentation and auditability: Keep clear, reproducible records from design to deployment so audits are straightforward.
- Pre-deployment reviews: Conduct independent model risk review and cybersecurity checks before go-live.
- Post-deployment monitoring: Track performance, bias, data drift, and incidents; review aggregate risks periodically (see the drift sketch after this list).
- Change management and decommissioning: Follow controlled change processes and retire systems cleanly when needed.
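On fairness testing, one simple check a firm might adopt is the demographic parity gap: the difference in favourable-outcome rates between groups. This is a minimal sketch; the metric choice and the 0.1 tolerance are illustrative assumptions, and many use cases will need a richer fairness definition.

import numpy as np

def parity_gap(favourable: np.ndarray, group: np.ndarray) -> float:
    """Difference in favourable-outcome rates between groups 0 and 1."""
    return abs(favourable[group == 1].mean() - favourable[group == 0].mean())

outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = approved
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # protected attribute
if parity_gap(outcomes, groups) > 0.1:         # illustrative tolerance
    print("Bias flag: investigate and mitigate before deployment")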
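For drift testing and post-deployment monitoring, a common technique is the Population Stability Index (PSI), which compares the live distribution of a feature or score against its training-time baseline. A sketch, assuming the widely used 0.2 rule-of-thumb alert threshold (not a MAS-specified value):

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a production distribution against its baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)        # training-time distribution
production = rng.normal(0.8, 1.2, 10_000)  # shifted live data
score = psi(baseline, production)
if score > 0.2:                            # rule-of-thumb alert level
    print(f"Drift alert (PSI = {score:.2f}): escalate for model review")

Wiring a check like this into scheduled monitoring, with alerts routed to the model owner, turns the "track data drift" expectation into something auditors can verify.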
4) AI capabilities and capacity
- Ensure teams building and operating AI systems have the right skills, conduct, and resources for the level of risk.
- Refresh training as methods, tools, and regulations evolve.
- Maintain technology infrastructure that meets performance, resilience, and security needs in line with regulatory and industry standards.
How MAS expects this to work
Stakeholder engagement
MAS is seeking feedback from industry stakeholders until 31 January 2026. Input from practitioners will help ensure the Guidelines are practical and outcomes-focused.
Implementation timeline
After the Guidelines are issued, MAS proposes a 12-month transition period. Institutions can phase in changes without disrupting operations, while prioritising higher-risk use cases.
What leaders should do now
- Within 30 days: Stand up an AI risk working group spanning business, risk, compliance, technology, data, and audit. Start your central AI inventory.
- Within 60-90 days: Define a risk materiality framework, set documentation standards, and agree minimum controls for high-, medium-, and low-risk use cases.
- Before any new deployment: Require independent model risk and cybersecurity reviews; verify explainability, fairness testing, and human oversight are in place.
- Ongoing: Monitor model performance and drift, review aggregate AI risk quarterly, and refresh training for teams managing high-impact use cases.
Why this matters for finance and management
- Better decision quality: Clear data standards and testing reduce bad outcomes and customer harm.
- Operational resilience: Stronger controls cut model incidents, downtime, and remediation costs.
- Regulatory confidence: Documented, repeatable processes make supervisory reviews smoother.
- Faster scaling: A consistent playbook for AI lets you deploy new use cases with less friction.
Conclusion
The AIRM Guidelines push AI from pilots and proofs of concept into disciplined, accountable practice. Institutions that set ownership, inventory their AI, assess materiality, and apply life cycle controls will be ready for the final rules and well placed to earn the trust of customers and supervisors.
If your teams need practical upskilling to meet these expectations, explore focused training and tools for finance professionals at Complete AI Training.