Singapore Weighs Holding Bank Boards Accountable for AI Risks
Singapore's central bank has proposed new guidelines that put AI accountability squarely on the board and senior management of financial institutions. If adopted, AI risk won't sit with "the tech team" - it will sit where strategy and fiduciary duty live.
What MAS Is Proposing
The Monetary Authority of Singapore (MAS) wants boards - or a delegated board committee - to ensure AI risks are treated as material business risks. That means explicitly integrating AI risks into the firm's risk appetite framework and making governance more than a box-ticking exercise.
Senior management would be responsible for rolling out AI risk policies and procedures, and for making sure teams have the competence to do the work. In short: clear owners, clear controls, clear skills.
Why It Matters Now
Singapore is pushing companies to invest in workforce training while adopting AI at scale. The three local banks are retraining all 35,000 of their Singapore-based employees over the next one to two years - a signal that AI is becoming part of day-to-day operations, not a side project.
MAS Chairman Gan Kim Yong reminded leaders that roles will shift. The priority is keeping people relevant as work changes, not trying to freeze today's job descriptions in place.
A Local Twist: AI Has To Understand Singlish
MAS also flagged a practical hurdle for language models: Singlish and mixed-language conversations. Existing LLMs struggle with these nuances, which raises risk in call centers, wealth-management conversations, and conduct monitoring.
To close the gap, a government agency will work with financial institutions to build a model that can accurately transcribe and process Singlish and common local dialects, according to MAS Managing Director Chia Der Jiun. For leaders, this is a reminder: validate models on the language your customers actually use.
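To make that validation step concrete, here is a minimal sketch that scores transcription quality on a Singlish test set using word error rate (WER). The sample utterances, the transcribe() stub, and the 15% threshold are hypothetical stand-ins for your own model, data, and risk appetite - not anything MAS has specified.

```python
# Minimal sketch: scoring a transcription model on a Singlish test set.
# The utterances and `transcribe` stub are hypothetical; swap in your
# own model client and a properly curated, representative test set.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (Levenshtein) divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn first i ref words into first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

# Hypothetical model outputs; in practice, call your ASR/LLM endpoint here.
model_outputs = {
    "clip_001": "can help me check my account balance",
    "clip_002": "the transfer never went through",
}

def transcribe(audio_id: str) -> str:
    """Stand-in for a real ASR/LLM call."""
    return model_outputs[audio_id]

# Hypothetical gold transcripts keyed by audio clip ID. Note the Singlish
# particles ("or not", "leh") the model drops - exactly what MAS is flagging.
golden = {
    "clip_001": "can help me check my account balance or not",
    "clip_002": "the transfer never go through leh",
}

MAX_WER = 0.15  # example threshold; set yours in the risk appetite framework
for clip_id, reference in golden.items():
    wer = word_error_rate(reference, transcribe(clip_id))
    status = "PASS" if wer <= MAX_WER else "FAIL"
    print(f"{clip_id}: WER={wer:.2%} [{status}]")
```

Both clips fail here because the dropped particles change meaning ("or not" marks a question; "leh" softens a complaint) - the kind of gap a generic benchmark won't surface.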
What Management Should Do Next
- Set board-level ownership: nominate a board committee for AI risk and schedule quarterly reviews.
- Update your risk appetite: define specific AI risk thresholds (e.g., bias rates, model drift, customer error rates, uptime).
- Stand up a model inventory: list every model in use, with its purpose, owner, data sources, and downstream systems (see the sketch after this list).
- Define approval gates: require model risk review before deployment and after major changes.
- Institute human oversight: put clear human-in-the-loop criteria for high-impact decisions.
- Measure and monitor: track KPIs/KRIs such as false positive/negative rates, drift metrics, latency, and incident counts.
- Test for local language: add Singlish and code-switching test sets for any customer-facing model.
- Tighten vendor risk: require transparency on training data, evaluation results, and update cadence from AI suppliers.
- Train for competence: create role-based upskilling paths for product, risk, compliance, and frontline teams.
- Prepare for incidents: define escalation paths, customer remediation steps, and board notification triggers.
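As a starting point for the inventory and monitoring items above, here is a minimal sketch of an inventory record with KRI thresholds tied to the risk appetite. The field names, metrics, and limits are illustrative assumptions, not MAS-prescribed requirements.

```python
# Minimal sketch of a model inventory record with risk-appetite thresholds.
# Field names and threshold values are illustrative, not MAS-prescribed.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_id: str
    purpose: str
    owner: str                        # accountable individual or team
    data_sources: list[str]
    downstream_systems: list[str]
    kri_thresholds: dict[str, float]  # KRI name -> maximum tolerated value

    def breaches(self, observed: dict[str, float]) -> list[str]:
        """Return the KRIs whose observed value exceeds the agreed threshold."""
        return [kri for kri, limit in self.kri_thresholds.items()
                if observed.get(kri, 0.0) > limit]

# Example entry: a hypothetical credit-decision model.
credit_model = ModelRecord(
    model_id="credit-scoring-v3",
    purpose="Retail credit approval recommendations",
    owner="Retail Risk Analytics",
    data_sources=["core_banking.accounts", "bureau_feed"],
    downstream_systems=["loan-origination", "conduct-monitoring"],
    kri_thresholds={"false_positive_rate": 0.05,
                    "demographic_parity_gap": 0.02,
                    "psi_drift": 0.10},
)

# Monthly monitoring run: compare observed metrics to thresholds and
# escalate breaches via your incident and board-notification paths.
observed = {"false_positive_rate": 0.06, "psi_drift": 0.04}
for kri in credit_model.breaches(observed):
    print(f"ESCALATE: {credit_model.model_id} breached {kri}")
```

The point of the structure is that the same record drives three of the actions above: the inventory itself, the risk-appetite thresholds, and the monitoring loop that feeds escalation.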
Governance Artifacts To Put In Place
- AI Risk Policy aligned to firm-wide risk appetite (with defined roles and responsibilities).
- Model Risk Standards (documentation, testing, explainability, fairness, and stability requirements).
- RACI for board, management, model owners, compliance, audit, and technology.
- AI Incident Register and post-incident review template (a minimal schema sketch follows this list).
- Third-party AI register and due diligence checklist.
- Data lineage and retention rules for training, validation, and production logs.
- Customer disclosure guidelines for AI-assisted interactions where required.
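For the incident register, a minimal schema sketch follows. The fields and the example entry are illustrative assumptions to adapt to your firm's own incident taxonomy and notification triggers.

```python
# Minimal sketch of an AI incident register entry; fields and the example
# entry are illustrative assumptions, not a prescribed MAS format.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    incident_id: str
    model_id: str
    detected_on: date
    severity: str            # e.g. "low" / "medium" / "high"
    description: str
    customer_impact: bool
    board_notified: bool     # set per your board-notification triggers
    remediation: str

register: list[AIIncident] = []
register.append(AIIncident(
    incident_id="INC-2025-014",
    model_id="credit-scoring-v3",
    detected_on=date(2025, 3, 2),
    severity="high",
    description="Drift breach on PSI metric after bureau feed change",
    customer_impact=False,
    board_notified=True,
    remediation="Rolled back to v2; retraining scheduled with new feed",
))
```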
Outside Singapore? Treat This As A Signal
Regulators worldwide are tightening expectations on AI governance. Even if you're not under MAS's remit, aligning with board-level accountability and measurable controls will reduce audit friction and speed up approvals.
Upskilling: Make Competence Measurable
Map roles to the skills your AI programs need - data literacy for frontline teams, model oversight for product owners, and AI risk concepts for risk/compliance. Build a blended plan: internal training for context, external programs for depth and certification.
For structured options by role, see Complete AI Training: Courses by Job.
What To Do This Week
- Brief the board on MAS's proposal and agree on oversight responsibilities.
- Run a quick AI risk maturity check: inventory, policies, monitoring, and incident response.
- Identify your highest-impact models and schedule fairness, stability, and language coverage tests.
- Lock in a 12-month training plan with quarterly progress checkpoints.
The direction is clear: AI is now a board matter. Treat it with the same discipline you apply to capital, liquidity, and conduct - and your teams will be ready for whatever the next guideline requires.