Black-box AI you don't control: compliance and third-party risk in banking

Compliance and risk: managing opaque AI models and third-party dependencies

AI now drives customer interactions, fraud decisions, and credit outcomes at scale. Yet many institutions can't fully explain or control the models they rely on. That gap isn't academic - it's a direct threat to compliance, trust, and revenue.

  • Opacity challenge: Deep learning models infer patterns in ways that aren't directly traceable or predictable.
  • Third-party dependency: Most banks use foundational models from external providers, adding another layer of uncertainty.
  • Regulatory and trust impact: Regulators expect transparency and control; customers expect clear reasons for decisions.

Why opacity breaks the old playbook

Traditional statistical models are linear and explainable: you can trace inputs, assumptions, and outputs with confidence. Many modern AI systems don't work that way. They learn their own internal representations, which makes precise explanations difficult and behavior harder to predict.

That mismatch disrupts model risk management. Validating assumptions, testing boundaries, and documenting logic get harder when the logic is emergent and dynamic.

The third-party complication

Foundational models from providers like OpenAI, Anthropic, and Google sit under many banking use cases. Institutions configure them but don't control the core training data, parameters, or update cadence. Providers can change model behavior without your approval, and often without notice.

Traditional vendor risk approaches weren't built for this. You're not just evaluating a vendor; you're depending on a moving, opaque system you can't fully inspect.

Where traditional risk management fails

Initial validation becomes a black-box exercise. You validate what you can observe, not how the model reasons. Ongoing monitoring is fragile because performance can shift after a provider update. And model challenge is limited when you can't see training data or architecture.

Regulators are raising the bar. Guidance is emerging globally, including the Monetary Authority of Singapore's focus on AI model risk and the EU AI Act's risk-based approach to controls. These expectations emphasize transparency, accountability, and the ability to explain outcomes to customers and supervisors.

  • Monetary Authority of Singapore: Responsible AI
  • EU AI Act overview

Real-world consequences

Frontline teams must explain fraud flags, adverse actions, and credit denials. Black-box answers won't satisfy customers or auditors. If only the provider understands the model's behavior, operational risk compounds - from misclassification to biased outcomes to regulatory breaches.

Trust erodes fast. Complaint volumes rise. Internal audit and the board face gaps they can't credibly close.

A manager's playbook: build control into AI from day one

Here's a practical framework leaders can use to gain control without stalling progress.

  • Set boundaries and risk appetite
    • Map AI use cases by business impact, data sensitivity, and explainability.
    • Define "no-go" areas (e.g., fully automated adverse actions with opaque logic).
    • Require human review for high-impact or low-explainability decisions.
  • Stand up cross-functional governance
    • Create a model risk squad (risk, compliance, data science, legal, procurement, IT).
    • Assign a single accountable owner per model with clear RACI across its lifecycle.
    • Adopt model factsheets for every deployment (purpose, data, limits, owners, KPIs/KRIs); a minimal factsheet sketch follows this list.
  • Upgrade vendor due diligence
    • Request model cards, documented safety policies, update cadence, and change logs.
    • Seek summaries on data sources and governance, bias testing practices, and red-team results.
    • Test providers with your datasets; require performance and fairness benchmarks.
  • Contract for control
    • Version pinning or rollback rights; notification windows for material updates.
    • Evidence deliverables: test reports, bias assessments, uptime/SLOs, incident reporting.
    • Clear kill switch, audit rights, and data handling commitments.
  • Engineer for explainability
    • Use interpretable models where required, or hybrid setups (rules + LLM).
    • Apply post-hoc tools (e.g., SHAP, LIME) with documented limits and test counterfactuals; a SHAP sketch follows this list.
    • Standardize customer-facing explanation templates and adverse action codes.
  • Strengthen monitoring and controls
    • Continuous input-output testing, bias checks, drift detection, and canary releases; a drift-check sketch follows this list.
    • Red-team for prompt injection, data leakage, and unintended behavior.
    • Track KPIs/KRIs: precision/recall, override rates, complaint volume, stability vs. prior versions.
  • Human oversight where it matters
    • Set thresholds for escalation and second-line reviews.
    • Sample decisions regularly; log rationale and reviewer feedback.
    • Tighten controls for vulnerable segments and high-impact products.
  • Prepare for incidents
    • Define triggers for rollback; rehearse playbooks for provider-driven changes.
    • Pre-draft regulator and customer communications.
    • Root-cause, remediate, and update controls after every event.
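
To make the factsheet item concrete, here is a minimal sketch of what a machine-readable model factsheet could look like in Python. The fields and example values are illustrative assumptions, not a regulatory standard; adapt them to your own model inventory and documentation requirements.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelFactsheet:
    """One record per deployed model, kept current across its lifecycle."""
    model_id: str                       # internal identifier, e.g. "fraud-scoring-v3"
    purpose: str                        # business use case the model supports
    owner: str                          # single accountable owner (name or role)
    provider: str                       # "in-house" or the external vendor
    version: str                        # pinned model/provider version in production
    data_sources: List[str] = field(default_factory=list)  # training and input data
    known_limits: List[str] = field(default_factory=list)  # documented limitations
    kpis: List[str] = field(default_factory=list)          # e.g. "precision >= 0.92"
    kris: List[str] = field(default_factory=list)          # e.g. "override rate < 5%"
    human_review_required: bool = True  # default to oversight for high-impact use

# Example entry for a hypothetical fraud model
factsheet = ModelFactsheet(
    model_id="fraud-scoring-v3",
    purpose="Real-time card fraud flagging",
    owner="Head of Fraud Analytics",
    provider="in-house",
    version="3.2.1",
    data_sources=["card transactions (24 months)", "device telemetry"],
    known_limits=["lower precision on card-not-present transactions under $5"],
    kpis=["precision >= 0.92", "recall >= 0.80"],
    kris=["override rate < 5%", "complaint volume per 10k decisions"],
)
```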
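
For the explainability step, a hedged sketch of post-hoc attribution with SHAP on a tree-based model. The data, feature names, and model are synthetic placeholders, and the exact shape of SHAP's output varies by library version and model type; in practice you would map these attributions onto approved adverse action codes rather than expose raw values.

```python
# Post-hoc attribution sketch with SHAP (illustrative, not a validated credit model).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)        # synthetic approve/decline labels
feature_names = ["income", "utilization", "tenure", "recent_inquiries"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
values = explainer.shap_values(X[:1])                # attributions for one decision

# Binary gradient boosting typically yields one (n_samples, n_features) array of
# log-odds contributions; some model types return one array per class instead.
row = values[0] if isinstance(values, np.ndarray) else values[1][0]

for name, contribution in sorted(zip(feature_names, row), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.3f}")
```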
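
For the monitoring step, a minimal sketch of drift detection using the Population Stability Index (PSI). The 0.1 and 0.25 thresholds are common rules of thumb, not regulatory values, and the data is synthetic; in practice you would compare live scores against a frozen validation-time baseline on a schedule and after any provider update.

```python
# Population Stability Index (PSI) sketch for drift monitoring (synthetic data).
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the current score distribution against a frozen baseline snapshot."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf                  # cover the full range
    expected_frac = np.histogram(expected, cuts)[0] / len(expected)
    actual_frac = np.histogram(actual, cuts)[0] / len(actual)
    expected_frac = np.clip(expected_frac, 1e-6, None)   # avoid log(0) and divide-by-zero
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac) * np.log(actual_frac / expected_frac)))

baseline = np.random.default_rng(0).normal(0.0, 1.0, 10_000)   # validation-time scores
current = np.random.default_rng(1).normal(0.3, 1.0, 10_000)    # scores after a provider update
psi = population_stability_index(baseline, current)
print(f"PSI = {psi:.3f}")   # rule of thumb: > 0.1 investigate, > 0.25 escalate or roll back
```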

90-day implementation plan

  • Days 0-30: Inventory AI use cases and vendors; define risk tiers; freeze versioning on high-impact models.
  • Days 31-60: Stand up the model risk squad; agree on factsheet template; add monitoring for drift and bias.
  • Days 61-90: Renegotiate key contracts for update notices and rollback; launch champion-challenger tests; finalize incident playbooks.

Accept the trade-offs - and win with discipline

More interpretable approaches can reduce raw accuracy. Post-hoc explanations are approximations. That's fine. Clear limits, stronger contracts, and continuous testing often beat opaque "best possible" models that you can't defend.

Institutions that combine third-party AI with genuine oversight will outperform. Those that can't explain, predict, or control outcomes will pay for it - with customers, regulators, and their P&L.

Tools and training

If you're building this capability across teams, explore practical resources and tools for finance and risk professionals.

Want more on managing AI-related risk in financial services? You can find out more here: Complete AI Training.

