Eight Principles for Managing Risk With AI in Banking
Banks deploying artificial intelligence in risk management face a fundamental challenge: technical capability alone doesn't satisfy regulators. The real test is whether AI decisions remain explainable and defensible under scrutiny.
Shweta Khosla, who spent 15 years in risk management roles at Citigroup, Bank of America, HSBC, and Ernst & Young, has distilled eight principles for bank leaders navigating this terrain. Her work spans investment banking, risk management, and AI strategy, including research at MIT.
The Eight Principles
- Ensemble Models Improve Reliability. At Bank of America, Khosla built an AI-driven tool combining multiple models to automate risk reporting. Ensemble approaches produce more stable outputs for executive and board decisions than single models.
- Calibration Enables Real-World Decisions. Risk appetite frameworks require precise calibration. Without it, AI models fail to deliver the accuracy regulators demand in stress-testing and capital adequacy assessments.
- Explainability Is Essential. At Citigroup, Khosla created transparent audit trails for regulatory compliance. Decisions must remain auditable and defensible. Black-box models create liability.
- Rare but Severe Tail Events Must Be Modeled. Standard models miss extreme scenarios. Khosla's research at MIT examined how quantum computing could enhance simulation of tail-risk events, the kind that triggered the 2008 crisis.
- Risk Appetite Should Shape AI Design. Technology without governance becomes speed without direction. At HSBC and Bank of America, Khosla ensured AI outputs aligned with institutional loss tolerance.
- Cloud Is the Foundation. Cloud infrastructure enables both speed and resilience. At Bank of America, Khosla automated manual data collection through in-house cloud systems.
- Emerging Technologies Will Reshape Risk Modeling. Quantum computing and advanced algorithms will transform optimization, simulation, fraud detection, and encryption. Banks need leaders who understand these shifts.
- Model Herding Can Amplify Systemic Risk. When similar algorithms operate across multiple institutions, synchronized crises become possible. Khosla observed this pattern across Citigroup, Bank of America, and HSBC.
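The ensemble principle above can be illustrated with a minimal sketch. The scorers `model_a`, `model_b`, and `model_c` are hypothetical stand-ins for trained risk models (the article does not describe Khosla's actual models); the point is that averaging several estimates, and reporting their spread, gives executives a more stable number plus a disagreement diagnostic.

```python
import statistics

# Hypothetical probability-of-default scorers. In practice these
# would be independently trained credit-risk models; here they are
# simple illustrative formulas over a borrower's features.
def model_a(features):
    return min(1.0, 0.02 + 0.50 * features["debt_ratio"])

def model_b(features):
    return min(1.0, 0.01 + 0.40 * features["debt_ratio"]
                    + 0.01 * features["late_payments"])

def model_c(features):
    return min(1.0, 0.03 + 0.45 * features["debt_ratio"])

def ensemble_pd(features, models=(model_a, model_b, model_c)):
    """Average the individual estimates and report their spread,
    a simple stability/disagreement signal for board reporting."""
    scores = [m(features) for m in models]
    return statistics.mean(scores), statistics.pstdev(scores)

borrower = {"debt_ratio": 0.35, "late_payments": 2}
pd_estimate, disagreement = ensemble_pd(borrower)
```

A large `disagreement` value flags cases where the models diverge and a single-model score would have been fragile, which is exactly where human review earns its keep.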
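The tail-event principle can also be made concrete with a toy simulation (assumptions, not the article's methodology): a thin-tailed Gaussian loss model is compared with a fat-tailed one that mixes in rare Pareto-distributed shocks, and the empirical 99.9% value-at-risk is computed for each. The thin-tailed model systematically understates the extreme quantile.

```python
import random

random.seed(7)
N = 100_000

# Hypothetical daily losses (arbitrary units). The "fat" series adds
# a rare (1%) Pareto-distributed shock on top of the Gaussian noise.
normal_losses = [abs(random.gauss(0, 1)) for _ in range(N)]
fat_losses = [abs(random.gauss(0, 1))
              + (random.paretovariate(2.5) if random.random() < 0.01 else 0.0)
              for _ in range(N)]

def empirical_var(losses, level=0.999):
    """Empirical value-at-risk: the loss exceeded with probability 1 - level."""
    return sorted(losses)[int(level * len(losses))]

normal_var = empirical_var(normal_losses)
fat_var = empirical_var(fat_losses)
# fat_var exceeds normal_var: rare shocks dominate the extreme quantile
```

Even a 1% contamination by heavy-tailed shocks moves the 99.9% quantile materially, which is why stress tests probe scenarios that calibrated-to-normal models never generate.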
Why This Matters for Bank Leaders
The gap between what AI can do and what regulators will accept is where most implementations fail. Banks that treat AI as a technical problem rather than a governance problem face delays, rejected models, and wasted investment.
Leaders need to understand not just what AI delivers, but why outputs matter under regulatory pressure. That requires people who have worked on both sides: building models and defending them to regulators.
For managers building or overseeing risk functions, the priority is clear: explainability and calibration matter more than raw accuracy. A model that regulators reject has zero value, no matter how precise.