U.S. Treasury launches six-part secure AI playbook for financial institutions

US Treasury releases practical AI guidance for finance: a lexicon and a sector-fit NIST AI RMF. Start small: baseline your maturity, map risks to controls, and help lean teams move safely.

Categorized in: AI News, Finance
Published on: Feb 21, 2026

Treasury's secure AI guidance for finance: what's out now and how to act on it

The U.S. Treasury Department has started releasing resources to help financial institutions use AI securely while meeting regulatory obligations. It's a six-part series built with input from financial executives and federal and state regulators, aimed at practical implementation rather than new mandates.

The first two resources are live: an AI lexicon for finance and a sector-focused version of NIST's AI Risk Management Framework. The framework package includes a maturity questionnaire, a matrix linking AI risks to security controls, and implementation guidance you can plug into existing programs.

The work comes out of Treasury's Artificial Intelligence Executive Oversight Group, which partnered with sector councils and other stakeholders. A key goal is to support small and mid-sized institutions with concrete steps that raise resilience without overloading lean teams.

Why this matters for your P&L and risk stack

Demand is clear: banks aim to cut fraud losses, insurers want sharper risk signals, and markets seek better pattern recognition. According to a 2025 World Economic Forum report, roughly a third of work in capital markets, insurance, and banking could be fully automated by AI.

The risk is just as real. Weak models can leak sensitive data, biased models can fuel discrimination, and correlated models can move markets in lockstep. Regulators also face capacity gaps, leaving firms to raise their own floor on AI controls until supervision catches up.

What Treasury has shipped so far

  • AI lexicon for finance: Common definitions for terms that often cause confusion across risk, tech, and business lines.
  • Finance-focused NIST AI RMF: A maturity self-assessment, a risk-to-controls mapping, and guidance for putting controls into practice across the model lifecycle.
  • Focus areas shaping next releases: Governance, fraud prevention, identity management, and transparency, driven by AIEOG workstreams.

If you want the source framework, see NIST's AI Risk Management Framework.

Turn the guidance into action in your institution

  • Stand up ownership: Name an accountable executive, set board oversight, define RACI across business, risk, compliance, and technology.
  • Inventory AI use: Catalog models and vendors by business line, criticality, and data sensitivity. Include pilots and shadow projects.
  • Run the maturity check: Use Treasury's questionnaire to baseline current state and set target scores with quarter-by-quarter milestones.
  • Build a risk taxonomy: Model risk, data privacy, cybersecurity, third-party, and fairness. Map each to controls you already use (e.g., NIST 800-53, CIS, ISO 27001) and fill gaps using the provided matrix.
  • Wire controls into delivery: Add gates in your SDLC/MLOps for data sourcing, approval, testing, and release. Require model cards, change logs, and versioning.
  • Data governance first: PII minimization, encryption, secrets management, lineage, retention rules, and privacy impact assessments. Consider synthetic data where it reduces exposure.
  • Model quality and resilience: Performance SLAs, drift detection, explainability thresholds, human-in-the-loop for high-risk decisions, and safe fallbacks or kill switches.
  • Fairness and consumer compliance: Test for disparate impact, document methodologies, and support adverse action notices under Reg B/ECOA and related laws.
  • Third-party risk: Due diligence on providers (security attestations, model documentation, incident history), contractual controls, and exit strategies.
  • Security and red-teaming: Access control, sandboxing, prompt and input filtering, logging, and periodic AI-specific red-team exercises to probe data leakage and model manipulation.
  • Incident response: Playbooks for model failure, bias events, and data exposure, with clear escalation paths and regulatory reporting triggers.
  • Metrics and reporting: Track false positives/negatives, drift, bias deltas, uptime, control coverage, and open risk items, then brief the board quarterly.
  • Upskill the workforce: Train first, second, and third lines on AI controls, documentation standards, and testing fundamentals. Run tabletop exercises with business owners.
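The inventory and risk-taxonomy steps above can be sketched in a few lines. This is a minimal illustration only: the risk names, control IDs, and model entries below are hypothetical examples, not Treasury's published matrix.

```python
from dataclasses import dataclass, field

# Illustrative risk taxonomy mapped to controls you may already use.
# Control IDs here are examples, not Treasury's actual risk-to-controls matrix.
RISK_TO_CONTROLS = {
    "data_privacy": ["NIST 800-53 SC-28", "ISO 27001 A.8.24"],
    "model_risk":   ["SR 11-7 validation", "NIST 800-53 SA-11"],
    "third_party":  ["NIST 800-53 SR-6", "ISO 27001 A.5.19"],
    "fairness":     ["Reg B/ECOA disparate-impact testing"],
}

@dataclass
class AIModel:
    """One row in the AI inventory: catalog by line, criticality, sensitivity."""
    name: str
    business_line: str
    criticality: str       # "high" | "medium" | "low"
    data_sensitivity: str  # "pii" | "internal" | "public"
    risks: list = field(default_factory=list)

    def required_controls(self):
        # Flatten and dedupe the controls mapped to this model's tagged risks.
        return sorted({c for r in self.risks for c in RISK_TO_CONTROLS.get(r, [])})

# Catalog everything, including pilots and shadow projects.
inventory = [
    AIModel("fraud-scorer-v2", "retail banking", "high", "pii",
            risks=["model_risk", "data_privacy", "fairness"]),
    AIModel("doc-summarizer-pilot", "operations", "low", "internal",
            risks=["third_party"]),
]

for m in inventory:
    print(f"{m.name}: {m.required_controls()}")
```

Even a spreadsheet works at small scale; the point is that every cataloged model resolves to a concrete, auditable control list rather than a vague risk label.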

Practical tips for small and mid-sized firms

Start with the lexicon to align language across teams, then run the maturity questionnaire to pick three high-impact gaps. Prioritize controls that reduce loss exposure fast: data protection, model approval gates, and monitoring with clear rollback paths.
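Picking the three highest-impact gaps from a maturity baseline can be as simple as sorting by the distance to target. The domains, 1-5 scale, and scores below are assumed for illustration; they are not Treasury's actual questionnaire.

```python
# Hypothetical maturity scores (1-5 scale) per domain; domains and targets
# are illustrative, not taken from Treasury's questionnaire.
current = {
    "governance": 2, "data_protection": 1, "model_approval": 2,
    "monitoring": 1, "third_party": 3, "fairness_testing": 2,
}
target = {domain: 4 for domain in current}  # uniform target for simplicity

# Rank domains by gap size (largest first) and take the top three.
ranked = sorted(current, key=lambda d: target[d] - current[d], reverse=True)
top_three = ranked[:3]
print("Prioritize this quarter:", top_three)
```

In practice you would weight gaps by loss exposure rather than treating every domain equally, but the mechanic of baseline, target, and ranked delta stays the same.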

Leverage vendor capabilities where it saves time, but keep decision rights, documentation, and testing in-house for material use cases. Socialize simple checklists so product teams can ship without constant meetings.

What to watch next

Treasury plans four more resources informed by governance, fraud, identity, and transparency workstreams. Expect additional implementation guidance that can slot into model risk management, cyber, and compliance programs without rewriting your entire control set.


Bottom line: pick one business outcome, one model, and one control gap to close this quarter. Then repeat. Consistency beats big-bang programs, especially with AI.

