AI's Growth Is Outpacing Financial Oversight
Global watchdogs told the G20 that supervisors remain at an early stage in tracking how artificial intelligence alters financial system risk. The Financial Stability Board highlighted material data gaps, limited transparency from providers, and the fast-changing nature of AI models as core blockers. That mix leaves blind spots across market, operational, and conduct risk.
Third-party dependencies are front and center. Many firms cannot justify building frontier models in-house, which concentrates risk in a small set of external GenAI providers, much as cloud computing did. As adoption scales, outages, model changes, or pricing shifts at those providers can propagate across many institutions at once.
Banks such as Goldman Sachs and HSBC are already applying GenAI to back-office workflows and wealth advisory. The FSB also flagged higher exposure to fraud and disinformation as content generation grows in speed and volume.
Separately, the Bank of England and the IMF warned that lofty AI-linked valuations could set up a sharp correction. Together, these signals point to both micro-level control gaps and macro-financial vulnerabilities that boards and risk leaders should treat as priority items.
Implications for CROs, CIOs, and Boards
- Map critical dependencies: inventory every AI system, model, and workflow; identify the external providers behind them; quantify concentration by vendor and region.
- Strengthen third-party risk: require security attestations (e.g., SOC 2, ISO 27001), clear model update logs, explainability documentation, data residency details, and incident response SLAs.
- Build exit options: define rollback plans, data portability, model handover terms, and minimum viable alternatives for high-impact use cases.
- Upgrade model governance: set materiality thresholds, independent validation, performance and bias testing, scenario analysis, benchmark comparisons, and change controls for prompts, features, or training data.
- Harden operational resilience: include GenAI providers in severe-but-plausible outage scenarios; test kill switches, rate limits, and degraded-mode playbooks.
- Enforce data safeguards: minimize PII, use encryption and tokenization, monitor for data leakage and poisoning, and restrict training on sensitive datasets.
- Expand cyber defense: red-team for prompt injection, data exfiltration, model abuse, and API misuse; log prompts and outputs for forensics.
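The dependency-mapping and concentration steps above can be sketched in a few lines. This is a minimal illustration, not a production tool: the use-case names, vendor labels, and spend figures are hypothetical, and real inventories would also capture risk tier, region, and fourth parties. The Herfindahl-Hirschman Index (HHI) used here is a standard concentration measure; values above roughly 2,500 are conventionally read as highly concentrated.

```python
from collections import Counter

# Hypothetical inventory of AI use cases mapped to their external provider.
# Names and annual spend (in $M) are illustrative, not real data.
use_cases = [
    {"name": "kyc-summarizer",    "vendor": "vendor-a", "annual_spend": 1.2},
    {"name": "advisor-copilot",   "vendor": "vendor-a", "annual_spend": 2.5},
    {"name": "trade-surveillance","vendor": "vendor-b", "annual_spend": 0.8},
    {"name": "doc-extraction",    "vendor": "vendor-c", "annual_spend": 0.5},
]

def vendor_hhi(cases):
    """Herfindahl-Hirschman Index over vendor spend shares (0..10000)."""
    spend = Counter()
    for case in cases:
        spend[case["vendor"]] += case["annual_spend"]
    total = sum(spend.values())
    return sum((100 * s / total) ** 2 for s in spend.values())

print(round(vendor_hhi(use_cases)))  # 5832: highly concentrated in vendor-a
```

In this toy example a single provider carries 74% of spend, which is exactly the kind of concentration the FSB flags: one outage or pricing change hits most of the inventory at once.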
Controls for Client-Facing, Markets, and Wealth
- Human-in-the-loop for client advice, trade ideas, and suitability checks; archive prompts and outputs for audit.
- Clear disclosures on AI assistance; prohibit unsupervised client communications and auto-execution without guardrails.
- Market integrity: pre-publication reviews for research or commentary generated with AI; monitor for deepfakes and false news amplification.
- Fraud controls: strengthen identity verification, anomaly detection, and payment controls against AI-assisted social engineering.
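The human-in-the-loop and archiving controls above can be reduced to a simple gate: no AI-drafted client content is released without an explicit human approval, and every prompt, draft, and decision is archived. The sketch below is an assumption-laden illustration; the function, field names, and in-memory log are hypothetical, and a real deployment would use a tamper-evident audit store.

```python
import time

AUDIT_LOG = []  # placeholder; production systems need a tamper-evident store

def reviewed_response(prompt, draft, reviewer, approved):
    """Release an AI-drafted client message only with human sign-off;
    archive the prompt, draft, reviewer, and decision for audit."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "prompt": prompt,
        "draft": draft,
        "reviewer": reviewer,
        "approved": approved,
    })
    if not approved:
        raise PermissionError("Draft rejected; do not send to client.")
    return draft

msg = reviewed_response(
    prompt="Summarize portfolio drift for client X",
    draft="Your allocation has drifted 4% from target; rebalancing options attached.",
    reviewer="j.doe",
    approved=True,
)
```

The key design choice is that the archive write happens before the approval check, so rejected drafts leave the same audit trail as released ones.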
Metrics to Report Upward
- AI usage inventory by business line, risk tier, and criticality.
- Vendor concentration indices and fourth-party exposure.
- Model quality: accuracy, error rates, bias metrics, drift indicators, and override rates.
- Incidents and near misses tied to AI, with financial impact and remediation time.
- Change logs for models, prompts, and datasets with approvals and testing evidence.
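Two of the metrics above, override rates and drift indicators, are straightforward to compute once decisions and score distributions are logged. The sketch below assumes a hypothetical decision log format; the drift measure shown is the Population Stability Index (PSI), a common model-monitoring statistic where values above roughly 0.25 are often treated as material drift.

```python
import math

def override_rate(decisions):
    """Share of model recommendations changed by a human reviewer."""
    overridden = sum(1 for d in decisions if d["human"] != d["model"])
    return overridden / len(decisions)

def psi(baseline_shares, current_shares):
    """Population Stability Index over matching score buckets.
    Shares in each list must sum to 1; > 0.25 suggests material drift."""
    return sum((c - b) * math.log(c / b)
               for b, c in zip(baseline_shares, current_shares))

# Hypothetical decision log: model recommendation vs. human action.
decisions = [
    {"model": "approve", "human": "approve"},
    {"model": "approve", "human": "reject"},
    {"model": "reject",  "human": "reject"},
    {"model": "approve", "human": "approve"},
]

print(override_rate(decisions))  # 0.25
print(psi([0.5, 0.3, 0.2], [0.4, 0.35, 0.25]))  # small value: no material drift
```

A rising override rate is often the earliest upward-reportable signal that model quality or fit has degraded, even before accuracy metrics move.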
Regulatory Direction: What to Watch
Expect more guidance on third-party oversight, model transparency, operational resilience, and disclosure. Coordination through the G20 process suggests convergence on dependency mapping, incident reporting, and baseline governance standards. For updates, monitor the Financial Stability Board.
Action Plan for the Next 90 Days
- Stand up a cross-functional AI risk council (risk, tech, legal, compliance, business) with a single policy set.
- Complete a first-pass register of AI use cases and third-party dependencies across the firm.
- Implement minimum controls for any client-facing GenAI use, including disclosure and human approval.
- Run an outage tabletop on your top two GenAI providers; document contingencies and triggers.
- Align model documentation and testing with existing model risk standards to speed regulatory reviews.
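The outage tabletop and its documented triggers imply a concrete control: a kill switch and a degraded-mode path around every GenAI provider call. The sketch below is a minimal illustration under stated assumptions; the provider functions are simulated stand-ins, and real routing would sit in an API gateway with alerting attached.

```python
KILL_SWITCH = False  # in practice, driven by a config flag or ops console

def generate(prompt, primary, fallback):
    """Route to the primary GenAI provider; on a kill switch or provider
    failure, degrade to a fallback (secondary provider or static path)."""
    if KILL_SWITCH:
        return fallback(prompt)
    try:
        return primary(prompt)
    except Exception:
        # Provider outage: trigger degraded mode instead of failing the client.
        return fallback(prompt)

def flaky_primary(prompt):
    raise TimeoutError("provider outage (simulated for the tabletop)")

def static_fallback(prompt):
    return "[degraded mode] A specialist will follow up shortly."

reply = generate("Summarize today's account activity",
                 primary=flaky_primary, fallback=static_fallback)
print(reply)
```

Exercising this path during the tabletop, rather than only documenting it, is what confirms the contingencies and triggers actually fire.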
This month's FSB report, commissioned by the South African G20 presidency and delivered ahead of meetings in Washington, makes one point clear: adoption is moving faster than oversight. Firms that map their dependencies transparently, govern AI firmwide, and maintain credible exit paths will be best positioned as supervisors close the gap.