MAS Guidelines for AI Risk Management: A Practical Brief for Senior Leaders
The Monetary Authority of Singapore has issued a consultation on Guidelines for AI Risk Management. The aim: give financial institutions (FIs) a clear, proportionate approach to using AI responsibly across the business, including Generative AI and emerging AI agents.
As proposed, the Guidelines will apply to all FIs. They set expectations for governance, risk oversight, lifecycle controls, and the capabilities needed to deploy AI at scale without exposing the firm to avoidable risk.
Why this matters
This is your signal to treat AI as a material risk class in its own right, one that wraps strategic, operational, conduct, model, and third-party risk into a single program. The message is simple: innovate, but put guardrails in place and make accountability visible from the board down.
Scope at a glance
- Governance and oversight: Boards and senior management own the risk posture and set the tone. Expect clear roles, reporting lines, and a documented framework.
- AI inventory and materiality: Keep a register of AI use cases. Rate each by impact, complexity, data sensitivity, reliance on automation, and potential customer harm (an inventory sketch follows this list).
- Lifecycle controls: Apply proportionate controls across data, fairness, transparency, human oversight, third-party risk, evaluation, monitoring, and change management.
- Organisational capability: Ensure you have the people, processes, and technology to build, buy, or integrate AI safely.
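To make the inventory and materiality rating concrete, here is a minimal sketch of what a register entry and a tiering rule could look like. The field names, 1-to-5 scoring scale, and tier cut-offs are illustrative assumptions, not anything prescribed by MAS; calibrate them against your existing risk taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in the AI inventory register (illustrative fields only)."""
    name: str
    business_line: str
    accountable_exec: str
    impact: int               # 1 (low) to 5 (high): potential financial/customer impact
    complexity: int           # 1 to 5: model and integration complexity
    data_sensitivity: int     # 1 to 5: public data through to sensitive personal data
    automation_reliance: int  # 1 to 5: degree of reliance on automated decisions
    customer_harm: int        # 1 to 5: potential for customer detriment

def materiality_tier(uc: AIUseCase) -> str:
    """Assign a proportionality tier from the five rating dimensions.

    The equal weights and cut-offs below are placeholders; calibrate them
    against your risk appetite and existing model risk tiers.
    """
    score = (uc.impact + uc.complexity + uc.data_sensitivity
             + uc.automation_reliance + uc.customer_harm)
    if score >= 20:
        return "high"
    if score >= 12:
        return "medium"
    return "low"

register = [
    AIUseCase("Credit decisioning assistant", "Retail lending", "Head of Retail Credit",
              impact=5, complexity=4, data_sensitivity=5, automation_reliance=4, customer_harm=5),
    AIUseCase("Internal policy Q&A chatbot", "Operations", "COO Office",
              impact=2, complexity=2, data_sensitivity=2, automation_reliance=1, customer_harm=1),
]

for uc in register:
    print(f"{uc.name}: tier = {materiality_tier(uc)}")  # high / low
```

However simple, a shared rating rubric like this is what lets the board compare exposure across business lines on a like-for-like basis.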
What to assign this quarter
- Board/C-suite: Approve AI risk appetite, reporting cadence, and escalation thresholds. Mandate an AI inventory and materiality assessment across all functions.
- Risk and Compliance: Stand up an AI risk policy aligned to existing model risk, operational risk, and outsourcing standards. Define proportionality tiers and the required controls for each tier (a tier-to-controls sketch follows this list).
- Technology and Data: Implement data lineage, quality checks, and access controls for AI training and inference. Set minimum standards for GenAI prompts, output handling, and logging.
- Procurement and Legal: Tighten vendor due diligence for AI. Update contracts to cover model changes, audit rights, security, IP, and support for incident response.
- Business Owners: Nominate accountable executives for each AI use case. Embed human-in-the-loop checkpoints where outcomes affect customers or carry conduct risk.
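One way to make "required controls by tier" auditable is a plain mapping from tier to minimum control set that Risk and Compliance owns and Internal Audit tests against. The tier names and control labels below are illustrative assumptions; align them with your existing model risk and operational risk control libraries.

```python
# Minimum control set per materiality tier (illustrative labels only).
REQUIRED_CONTROLS: dict[str, set[str]] = {
    "low": {
        "use_case_registered", "owner_assigned", "data_sources_documented",
    },
    "medium": {
        "use_case_registered", "owner_assigned", "data_sources_documented",
        "pre_launch_evaluation", "production_monitoring", "change_management",
    },
    "high": {
        "use_case_registered", "owner_assigned", "data_sources_documented",
        "pre_launch_evaluation", "production_monitoring", "change_management",
        "bias_testing", "human_in_the_loop", "independent_validation", "vendor_assurance",
    },
}

def missing_controls(tier: str, implemented: set[str]) -> set[str]:
    """Return the controls still outstanding for a use case at a given tier."""
    return REQUIRED_CONTROLS[tier] - implemented

# Example: a high-tier use case with only the baseline controls in place.
print(sorted(missing_controls("high", {"use_case_registered", "owner_assigned",
                                       "data_sources_documented"})))
```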
30/60/90-day plan
- Day 30: Publish the AI risk policy; complete a first-pass AI inventory; freeze deployment of unregistered use cases (a simple release gate is sketched after this plan).
- Day 60: Tier all use cases by materiality; implement baseline controls (data checks, bias tests, monitoring) for high-tier items; start vendor re-papering.
- Day 90: Establish ongoing monitoring with alerts; run tabletop exercises for AI incidents; deliver board reporting with metrics and remediation status.
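The Day 30 freeze on unregistered use cases can be enforced mechanically rather than by memo: a release gate that refuses to deploy anything missing from the inventory or short of its tier's required controls. The sketch below is a minimal version of that gate; the record shapes are assumptions, and in practice the check would sit inside your CI/CD pipeline or change-approval workflow.

```python
def deployment_allowed(use_case_id: str,
                       register: dict[str, dict],
                       required_controls: dict[str, set[str]]) -> tuple[bool, str]:
    """Gate a deployment on registration and control coverage.

    `register` maps use-case IDs to records holding a materiality tier and the
    set of implemented controls; `required_controls` is a tier-to-controls
    mapping like the one sketched earlier. Both shapes are illustrative.
    """
    record = register.get(use_case_id)
    if record is None:
        return False, "Use case is not in the AI inventory; deployment is frozen."
    gaps = required_controls[record["tier"]] - record["implemented_controls"]
    if gaps:
        return False, f"Missing controls for tier '{record['tier']}': {sorted(gaps)}"
    return True, "All required controls are in place."

# Example: a medium-tier use case missing production monitoring is blocked.
ok, reason = deployment_allowed(
    "genai-complaint-summariser",
    {"genai-complaint-summariser": {
        "tier": "medium",
        "implemented_controls": {"use_case_registered", "owner_assigned"}}},
    {"medium": {"use_case_registered", "owner_assigned", "production_monitoring"}},
)
print(ok, reason)
```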
Lifecycle controls checklist
- Data management: Document sources, consent, lineage, and retention. Block sensitive data from prompts and training unless explicitly approved.
- Fairness and conduct: Test for unwanted bias. Define allowed variables, fallback rules, and remediation steps if drift appears.
- Transparency: Disclose AI use where it affects customers. Keep plain-English model cards and decision summaries for reviews and audits.
- Human oversight: Specify who can approve, override, or stop an AI decision. Use thresholds to trigger human review.
- Third-party risk: Validate supplier controls, model update practices, and security. Require incident notifications and independent assurance where risk is high.
- Evaluation and monitoring: Set acceptance criteria before launch. Track accuracy, stability, drift, bias, latency, and failure modes in production (a monitoring sketch follows this checklist).
- Change management: Treat model retraining and prompt updates as controlled changes. Re-test and re-approve by materiality tier.
- Security and privacy: Segment environments, use key management, and redact sensitive data. Log prompts and outputs for audit while meeting privacy rules.
- Documentation: Keep a single system of record with inventory, tiering, controls, approvals, metrics, and incidents.
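For the evaluation and monitoring item, the practical step is to turn pre-launch acceptance criteria into production thresholds with alerts. The sketch below does that for a handful of metrics; the threshold values, the population stability index (PSI) used as the drift measure, and the metric names are illustrative assumptions rather than fixed requirements.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of proportions summing to 1).

    PSI is one common drift measure; substitute whatever your validation team uses.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against empty bins
        psi += (a - e) * math.log(a / e)
    return psi

# Acceptance thresholds agreed before launch (illustrative values).
THRESHOLDS = {
    "accuracy_min": 0.90,
    "psi_max": 0.20,           # values above ~0.2 are often read as material drift
    "bias_gap_max": 0.05,      # max tolerated outcome-rate gap between customer groups
    "p95_latency_ms_max": 800,
}

def check_production_metrics(metrics: dict[str, float]) -> list[str]:
    """Return human-readable alerts for any breached threshold."""
    alerts = []
    if metrics["accuracy"] < THRESHOLDS["accuracy_min"]:
        alerts.append(f"Accuracy {metrics['accuracy']:.2f} below floor {THRESHOLDS['accuracy_min']}")
    if metrics["psi"] > THRESHOLDS["psi_max"]:
        alerts.append(f"Input drift PSI {metrics['psi']:.2f} above {THRESHOLDS['psi_max']}")
    if metrics["bias_gap"] > THRESHOLDS["bias_gap_max"]:
        alerts.append(f"Outcome gap {metrics['bias_gap']:.2f} above {THRESHOLDS['bias_gap_max']}")
    if metrics["p95_latency_ms"] > THRESHOLDS["p95_latency_ms_max"]:
        alerts.append(f"p95 latency {metrics['p95_latency_ms']}ms above limit")
    return alerts

# Example: drift in the input distribution triggers an alert for human review.
drift = population_stability_index(expected=[0.25, 0.25, 0.25, 0.25],
                                   actual=[0.10, 0.20, 0.30, 0.40])
print(check_production_metrics({"accuracy": 0.93, "psi": drift,
                                "bias_gap": 0.03, "p95_latency_ms": 450}))
```

Alerts like these are what feed the human-oversight thresholds above and the board metrics discussed later.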
GenAI and AI agents: extra safeguards
- Prompt governance: Maintain approved prompt libraries for high-risk workflows. Restrict access and version control changes.
- Output controls: For customer-facing text, apply toxicity filters and fact-checking, and add watermarking or disclosure where appropriate.
- Tool use by agents: Limit what agents can trigger (payments, trades, emails). Run agents in sandboxed environments with hard limits and human confirmation steps (a gating sketch follows this list).
- IP and data leakage: Block training on proprietary or client data without approval. Use contractual controls with vendors to prevent reuse.
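For agent tool use, the safeguards above boil down to an allowlist, hard limits, and a human confirmation step before anything with real-world effect. Here is a minimal sketch of that gate; the tool names, limits, and confirmation hook are assumptions to adapt to whichever agent framework you actually run.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class ToolPolicy:
    """Hard limits for one tool an agent may invoke (illustrative fields)."""
    allowed: bool
    max_amount: Optional[float] = None     # e.g. cap on payment value
    requires_human_confirm: bool = False

# Illustrative policy table: drafting emails is free, small payments need a
# human to confirm, and placing trades is never available to the agent.
TOOL_POLICIES = {
    "draft_email": ToolPolicy(allowed=True),
    "initiate_payment": ToolPolicy(allowed=True, max_amount=1_000.0, requires_human_confirm=True),
    "place_trade": ToolPolicy(allowed=False),
}

def gate_tool_call(tool: str, amount: Optional[float],
                   human_confirm: Callable[[str], bool]) -> bool:
    """Return True only if the requested tool call passes every policy check."""
    policy = TOOL_POLICIES.get(tool)
    if policy is None or not policy.allowed:
        return False
    if policy.max_amount is not None and (amount is None or amount > policy.max_amount):
        return False
    if policy.requires_human_confirm:
        return human_confirm(f"Agent requests '{tool}' for amount {amount}. Approve?")
    return True

# Example calls; the lambda stands in for a real review queue or approval UI.
print(gate_tool_call("initiate_payment", 250.0, human_confirm=lambda msg: True))  # True
print(gate_tool_call("place_trade", None, human_confirm=lambda msg: True))        # False
```

Running the agent in a sandboxed environment then ensures that even a prompt-injected or misbehaving agent cannot reach tools outside this table.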
Capabilities and capacity
Map the skills you need: AI product owners, model risk specialists, data stewards, ML engineers, prompt engineers, and audit support. Fill the gaps with training, hiring, or managed services.
If you need a quick way to upskill teams by role or function, review the curated options in Courses by Job. For practical tooling that can stand up to control requirements, see AI Tools for Finance.
Metrics your board should see
- AI use cases by business line and materiality tier
- Control coverage (planned vs. implemented) and test results (a coverage calculation is sketched after this list)
- Incidents, near misses, and customer complaints tied to AI
- Model performance trends and bias indicators
- Third-party exposure and contract coverage status
- Training completion rates for key roles
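Of these, control coverage is the easiest figure to misreport, so it helps to compute it the same way every quarter: implemented controls over planned controls, broken down by materiality tier. The sketch below assumes the register carries planned and implemented control sets per use case; adapt the shape to your actual system of record.

```python
def control_coverage(use_cases: list[dict]) -> dict[str, float]:
    """Per-tier control coverage: implemented planned controls / planned controls."""
    planned: dict[str, int] = {}
    done: dict[str, int] = {}
    for uc in use_cases:
        tier = uc["tier"]
        planned[tier] = planned.get(tier, 0) + len(uc["planned_controls"])
        done[tier] = done.get(tier, 0) + len(uc["planned_controls"] & uc["implemented_controls"])
    return {tier: done[tier] / planned[tier] for tier in planned if planned[tier]}

# Example board figure: two of three planned controls in place for one high-tier item.
print(control_coverage([{
    "tier": "high",
    "planned_controls": {"bias_testing", "production_monitoring", "human_in_the_loop"},
    "implemented_controls": {"production_monitoring", "human_in_the_loop"},
}]))  # {'high': 0.666...}
```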
Timeline and engagement
MAS has invited public feedback on the proposed Guidelines, with submissions due by 31 January 2026. Coordinate an industry-level response with Legal, Risk, and business leads, and align internal plans so you can move once the final version lands.
The bottom line
The opportunity is clear: treat AI as a managed capability, not a side project. Put ownership at the top, apply proportionate controls, and build the muscle to adapt as models and use cases evolve.