Treasury Issues AI Risk Playbook for Banks and Fintechs: What Managers Need to Do Now
AI is moving from pilot to production across finance. That's good for speed and scale - and risky for bias, opacity, and consumer harm if teams speak past each other. Treasury's new guidance supplies two things teams have lacked: a shared language for AI and a practical way to manage its risk without slowing delivery.
The two releases - an Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF) - aim to standardize how institutions, regulators, and vendors talk about AI and how they control it from concept to decommissioning. The framework adapts the federal NIST AI Risk Management Framework to financial operations and consumer protection.
As Treasury's chief AI officer Paras Malik put it, "Clear terminology and pragmatic risk management are essential to accelerating AI adoption in financial services." Derek Theurer, performing the duties of Deputy Secretary of the Treasury, emphasized that national AI priorities need "practical resources," not aspirational statements.
What changed (in plain English)
- Common definitions: Fewer turf wars over whether something is "AI," "ML," or "automation." The lexicon sets shared terms so risk, audit, and product teams stop talking past each other.
- Lifecycle controls: The FS AI RMF walks through use-case scoping, data controls, testing, monitoring, incident response, and accountability - the places failures actually occur.
- Right-sized implementation: Scales from community banks to global institutions so control expectations don't become a blocker to adoption.
- Policy alignment: Built through interagency and industry coordination to sync with broader White House AI policy.
Manager playbook: actions for this quarter
- Adopt the lexicon company-wide: Update policies, model documentation, and training to use the same terms across product, risk, compliance, audit, and vendors.
- Stand up an AI use-case registry: Central inventory with owner, purpose, data sources, model type, deployment tier, consumer impact, and control status (a minimal schema sketch follows this list).
- Establish lifecycle checkpoints: Require pre-deployment testing (fairness, performance, explainability), post-deployment monitoring (drift, bias, stability), and periodic recertification (see the drift-check sketch after this list).
- Define accountability: Name an executive sponsor, product owner, model owner, and control owners. Document approval gates and an escalation path for issues.
- Set explainability thresholds: Where decisions affect customers or compliance, require understandable reasoning or approved surrogate explanations that have been validated for fidelity to the underlying model.
- Tighten data controls: Lock down training data lineage, PII handling, vendor access, and retention. Prohibit shadow datasets and unvetted external data pulls.
- Vendor governance: Classify AI vendors by risk tier. Demand model summaries, testing evidence, incident SLAs, and the right to audit or receive third-party assurance.
- Human-in-the-loop where it matters: For high-impact use cases (credit, fraud declines, account closures), require review workflows and override authority with logging.
- Bias and consumer impact testing: Define protected groups, select fairness metrics, test pre- and post-deployment, and remediate with documented trade-offs (a sample adverse impact screen follows this list).
- Incident response for AI: Add AI-specific triggers (data leakage, model drift, unusual denial spikes) to your playbooks, with customer communication templates (a simple spike trigger is sketched after this list).
- Board reporting: Deliver a quarterly dashboard covering AI use cases by risk tier, testing results, incidents, vendor status, and remediation timelines.
- Audit readiness: Keep evidence packages: model cards, test results, approvals, monitoring logs, and change history. Make them examiner-friendly.
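To make the use-case registry concrete, here is a minimal sketch of one way to model a registry record in Python. The field names, risk tiers, and example values are illustrative assumptions, not terms prescribed by Treasury's lexicon or the FS AI RMF.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # e.g., credit decisions, account closures

@dataclass
class AIUseCase:
    """One entry in the AI use-case registry (illustrative schema)."""
    use_case_id: str
    owner: str                    # accountable product owner
    purpose: str
    data_sources: list[str]
    model_type: str               # e.g., "gradient boosting", "LLM"
    deployment_tier: RiskTier
    consumer_impact: bool         # does it affect customer outcomes?
    control_status: str           # e.g., "approved", "pending review"
    last_recertified: date | None = None

registry: dict[str, AIUseCase] = {}

def register(use_case: AIUseCase) -> None:
    registry[use_case.use_case_id] = use_case

register(AIUseCase(
    use_case_id="UC-001",
    owner="retail-credit-team",
    purpose="Pre-screen credit card applications",
    data_sources=["bureau_data", "application_form"],
    model_type="gradient boosting",
    deployment_tier=RiskTier.HIGH,
    consumer_impact=True,
    control_status="approved",
))
```

However the registry is stored, it only pays off if each field has a named owner responsible for keeping it current.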
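For the post-deployment monitoring checkpoint, a common starting point is a population stability index (PSI) check comparing live model scores against a training baseline. The function below is a minimal sketch; the 0.2 alert threshold is a widely used rule of thumb, not a value the framework mandates.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Rule of thumb (illustrative): < 0.1 stable, 0.1-0.2 watch, > 0.2 drift.
    """
    # Bin edges come from the baseline distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    exp_share = np.histogram(expected, bins=edges)[0] / len(expected)
    act_share = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the shares to avoid division by zero and log(0)
    exp_share = np.clip(exp_share, 1e-6, None)
    act_share = np.clip(act_share, 1e-6, None)

    return float(np.sum((act_share - exp_share) * np.log(act_share / exp_share)))

baseline = np.random.default_rng(0).normal(600, 50, 10_000)  # training scores
live = np.random.default_rng(1).normal(585, 55, 2_000)       # shifted in production

if psi(baseline, live) > 0.2:
    print("ALERT: score drift exceeds threshold; trigger model review")
```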
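For bias testing, one widely used first screen is the adverse impact ratio, borrowed from the "four-fifths rule" in US employment guidance. The groups, synthetic data, and 0.8 threshold below are illustrative assumptions; a real program would define its protected classes and fairness metrics with counsel.

```python
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group_a: np.ndarray,
                         group_b: np.ndarray) -> float:
    """Ratio of the lower group's approval rate to the higher group's."""
    rates = sorted([approved[group_a].mean(), approved[group_b].mean()])
    return rates[0] / rates[1]

# Synthetic decisions for illustration: 1 = approved, 0 = denied
rng = np.random.default_rng(42)
approved = rng.integers(0, 2, 1_000)
group_a = rng.random(1_000) < 0.5  # hypothetical protected-group flag
group_b = ~group_a

air = adverse_impact_ratio(approved, group_a, group_b)
if air < 0.8:  # four-fifths screen; a flag warrants follow-up analysis, not auto-judgment
    print(f"FLAG: adverse impact ratio {air:.2f} is below 0.8")
```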
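An AI-specific incident trigger such as an unusual denial spike can start as a comparison of today's denial rate against a trailing baseline. The three-sigma threshold and seven-day window here are illustrative choices, not prescribed controls.

```python
from statistics import mean, stdev

def denial_spike(trailing_rates: list[float], today: float,
                 sigmas: float = 3.0) -> bool:
    """Flag if today's denial rate exceeds the trailing mean by `sigmas` std devs."""
    baseline, spread = mean(trailing_rates), stdev(trailing_rates)
    return today > baseline + sigmas * spread

history = [0.12, 0.11, 0.13, 0.12, 0.14, 0.12, 0.13]  # trailing 7 days
if denial_spike(history, today=0.21):
    print("INCIDENT: denial-rate spike; open an AI incident and notify the model owner")
```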
What to watch next
- Examination language hardens: Expect supervisors to adopt the lexicon in reviews and require alignment in your documentation.
- Heavier focus on monitoring: Controls won't stop at pre-launch. Continuous testing, alerts, and retraining discipline will get closer scrutiny.
- More industry coordination: Treasury will keep syncing agencies and industry groups, which reduces ambiguity - and raises expectations.
If your teams already use the NIST framework, this will feel familiar. The difference is financial-grade expectations and consumer protections layered on top. For context, see NIST's overview of its AI Risk Management Framework.
Talent Signal: O'Melveny Adds Two Trial-Proven Antitrust Partners
O'Melveny brought on Diana Aguilar (from the DOJ) and Lauren Weinstein (from MoloLamken) as partners in San Francisco and Washington, DC. They bring deep experience in complex antitrust litigation, investigations, and enforcement across tech, finance, transportation, media, and energy.
The firm has added dozens of lateral partners since 2023, with a clear push to build trial-ready antitrust strength in key markets. For executives, the takeaway is simple: regulators are prioritizing competition matters, and top firms are staffing accordingly.
Implications for leadership
- Deal strategy: Plan for tougher merger reviews and longer timelines. Build clean-room protocols and upfront remedies into your deal models.
- Algorithmic risk: If you use pricing or bidding algorithms, run counsel-reviewed assessments for tacit coordination risk and vendor data-sharing exposure.
- Communications hygiene: Refresh training on competitor contacts, market signaling, and internal language that can be misread by enforcers.
- Litigation readiness: Maintain decision memos, privilege discipline, and a document map for fast response to civil investigative demands or subpoenas.
Quick checklist to brief your team
- Adopt Treasury's AI lexicon in policies and training.
- Stand up or refresh your AI lifecycle controls using FS AI RMF principles.
- Tier your AI vendors and update contractual obligations.
- Run an antitrust review of pricing, bidding, and data-sharing models.
- Update M&A playbooks for extended review scenarios.
The bottom line: Treasury wants AI adoption with guardrails. Align your language, prove your controls, and keep a clean audit trail. On competition, assume scrutiny and prepare like you'll be asked to show your work.