Singapore's MAS Opens Consultation on AI Risk Guidelines for Financial Institutions

MAS proposes AI risk guidelines for FIs, setting clear expectations for governance, life cycle controls, and skills. Expectations scale with firm size, risk profile, and use cases, including GenAI.

Published on: Nov 15, 2025

MAS Proposes AI Risk Management Guidelines for Financial Institutions

The Monetary Authority of Singapore (MAS) has released a consultation paper proposing Guidelines on AI Risk Management for all financial institutions. The guidance sets clear supervisory expectations across governance, systems and policies, AI life cycle controls, and the capabilities needed to use AI responsibly.

MAS is taking a proportionate approach. Expectations scale with each firm's size, business model, and the risk level of its AI use cases, including Generative AI and newer patterns such as AI agents.

What Boards and Senior Management Need to Own

Leadership is accountable for AI risk management. This means setting direction, resourcing the work, and embedding a risk-aware culture for AI across the organization.

  • Appoint a senior executive accountable for AI risk and establish clear reporting lines to the board.
  • Define AI risk appetite and thresholds for model performance, conduct risk, and customer outcomes.
  • Approve firm-wide AI policies and minimum standards for all use cases, including third-party solutions.
  • Set a regular review cadence for high-impact use cases and incidents.
  • Ensure budgets, skills, and tools are in place to meet the guidelines.

Firm-Wide Systems, Policies, and Inventories

FIs should know where AI is used, why, and how. MAS expects clear identification processes and an accurate, current inventory of AI systems.

  • Maintain a central AI inventory with owners, purposes, data sources, model types, and deployment status.
  • Classify each use case by impact, difficulty, and reliance to determine risk materiality.
  • Document data lineage and access controls; track customer-facing vs. internal-only use.
  • Map third-party and open-source dependencies for each use case.
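The inventory bullets above can be sketched as a simple record type. This is a hypothetical Python sketch; the field names (`owner`, `data_sources`, and so on) are illustrative assumptions, not a schema prescribed by MAS.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a central AI inventory (illustrative fields only)."""
    name: str
    owner: str                 # accountable individual or team
    purpose: str               # business purpose of the system
    data_sources: list[str]    # upstream datasets feeding the model
    model_type: str            # e.g. "gradient-boosted trees", "LLM"
    deployment_status: str     # e.g. "pilot", "production", "retired"
    customer_facing: bool      # customer-facing vs. internal-only
    third_party_dependencies: list[str] = field(default_factory=list)

# A central inventory is then a queryable collection of these records.
inventory = [
    AIUseCase(
        name="transaction-monitoring-scorer",
        owner="Financial Crime Analytics",
        purpose="Rank transactions for AML review",
        data_sources=["core-banking-ledger"],
        model_type="gradient-boosted trees",
        deployment_status="production",
        customer_facing=False,
    ),
]

# e.g. list all customer-facing production systems for closer review
high_visibility = [u.name for u in inventory
                   if u.customer_facing and u.deployment_status == "production"]
```

Keeping the inventory as structured records, rather than a spreadsheet of free text, makes the classification and reporting steps below straightforward to automate.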

AI Life Cycle Controls to Put in Place

Controls should cover the full life cycle and be proportionate to risk. Focus on outcomes that protect customers, the firm, and the market.

  • Data management: data quality, provenance checks, consent and usage limits, PII safeguards.
  • Fairness: define metrics, test for bias, and document mitigations and trade-offs.
  • Transparency and explainability: right-sized disclosures for customers and internal stakeholders.
  • Human oversight: clear human-in-the-loop criteria, escalation paths, and override controls.
  • Third-party risk: due diligence, contractual controls, ongoing monitoring, and exit plans.
  • Evaluation and testing: pre-deployment testing, scenario and stress tests, and independent review.
  • Monitoring: drift detection, performance and conduct monitoring, alerting, and issue logging.
  • Change management: approvals, versioning, rollback plans, and controlled releases.
  • Incident response: clear playbooks for model failures, data issues, and customer harm.
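One common way to implement the drift-detection bullet above is the Population Stability Index (PSI), which compares the distribution of a model input or score at baseline against live traffic. The sketch below is illustrative: MAS does not prescribe a specific drift metric, and the ~0.25 alert threshold is an industry convention, not a regulatory figure.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') and a live ('actual') sample.
    Values above ~0.25 are commonly treated as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard the all-identical-values case

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # floor each share to avoid log(0) on empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# identical distributions produce a PSI near zero; shifted ones grow quickly
baseline = [i / 100 for i in range(100)]
assert population_stability_index(baseline, baseline) < 0.01
```

In practice a monitoring job would compute this per feature and per score on a schedule, log the values, and raise an alert that feeds the incident-response playbooks above.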

Capabilities and Capacity

MAS expects firms to have the people, processes, and tools to meet their AI ambitions. Oversight should be independent where needed and supported by skilled teams.

  • Build cross-functional teams spanning risk, compliance, legal, data science, engineering, and product.
  • Stand up MLOps/LLMOps: model registry, testing pipelines, access controls, and audit trails.
  • Provide targeted training for model owners, reviewers, and business users.
  • Set up independent validation for higher-risk use cases.
  • Align procurement and vendor management with AI-specific requirements.

A 90-Day Action Plan

  • Days 0-30: Appoint the accountable executive. Draft an AI risk policy and taxonomy. Start the AI inventory. Pause or add guardrails to any high-risk, ungoverned use.
  • Days 31-60: Implement risk materiality scoring (impact, difficulty, reliance). Define mandatory controls by tier. Review top third-party AI suppliers. Establish baseline metrics and reporting.
  • Days 61-90: Secure board approval of the framework. Launch training for model owners and reviewers. Run a tabletop exercise for an AI incident. Prepare your response to the consultation and plan remediation timelines.
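The risk materiality scoring in days 31-60 could be sketched as a tiering function over the three dimensions named above (impact, difficulty, reliance). The 1-5 scale and the thresholds below are hypothetical assumptions for illustration, not values from the consultation paper.

```python
def materiality_tier(impact: int, difficulty: int, reliance: int) -> str:
    """Map 1-5 scores on the three dimensions to a control tier.
    Thresholds here are illustrative, not from the MAS paper."""
    for score in (impact, difficulty, reliance):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    worst = max(impact, difficulty, reliance)
    total = impact + difficulty + reliance
    if worst >= 5 or total >= 12:
        return "high"      # independent validation, board-level reporting
    if worst >= 3 or total >= 8:
        return "medium"    # enhanced testing and monitoring
    return "low"           # baseline controls

# e.g. a customer-facing GenAI chatbot scored 5/4/4 lands in the top tier
assert materiality_tier(5, 4, 4) == "high"
assert materiality_tier(1, 1, 2) == "low"
```

Using both a worst-dimension rule and a total-score rule means a single severe dimension is enough to escalate a use case, even when the other scores are low.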

Why This Matters

The guidelines build on MAS' 2024 thematic review of banks' AI use and ongoing discussions with the industry. As Deputy Managing Director Ho Hern Shin said, the proposed approach sets clear expectations that enable responsible innovation, provided firms implement safeguards to address key AI risks.

Source and Next Steps

Review MAS materials and evaluate your current practices against these expectations. Begin closing gaps now to avoid a scramble later.

Build Team Capability

If you need structured upskilling for managers and practitioners, explore role-based programs and tools that align with these controls.

