Treasury issues first AI security resources for financial services
Throughout February, the Treasury Department plans to issue six resources to support secure and resilient AI across the financial services sector. The first two are now public, giving institutions a starting point to tighten controls and move faster with confidence.
This work follows the conclusion of the Artificial Intelligence Executive Oversight Group (AIEOG), a public-private effort between the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council (FSSCC). The group convened senior executives, federal and state regulators, and other stakeholders to produce practical tools for financial institutions.
According to Treasury officials, the resources focus on helping institutions, especially small and mid-sized firms, strengthen cyber defenses and deploy AI more securely. Leaders stressed that coordinated action between industry and government can raise resilience across the system.
What's out now
- AI Lexicon: A common vocabulary to reduce ambiguity across risk, compliance, security, procurement, and technology teams.
- Financial Services AI Risk Management Framework (RMF): A sector-specific framework that complements the NIST AI RMF, helping institutions translate principles into bank-ready controls, processes, and metrics.
Treasury stated that this effort advances President Donald Trump's AI Action Plan by strengthening AI security within the financial sector. Industry leaders emphasized that clearer risk identification enables institutions of all sizes to deploy AI safely while creating value for clients.
Why this matters for finance leaders
- Clearer supervisory expectations: The sector now has a shared reference point to align model risk, cybersecurity, and third-party oversight for AI systems.
- Faster cross-functional alignment: A standard lexicon and RMF reduce friction between risk, legal, compliance, procurement, and technology.
- Better vendor management: The RMF can inform due diligence, contract terms, and ongoing monitoring for AI vendors and data providers.
- Stronger operational resilience: Controls for data integrity, model monitoring, incident response, and access management map directly to AI use cases.
What to do next
- Inventory AI use: Catalog current and planned AI models, tools, and vendors across business lines. Note data sources, model purpose, decision criticality, and user access.
- Adopt the lexicon: Standardize terms across policies, training, and vendor documentation to cut miscommunication and speed reviews.
- Map controls to the RMF: Compare your existing model risk, cybersecurity, and data governance controls to the Treasury and NIST AI RMF. Close gaps with clear owners and timelines.
- Tighten third-party risk: Update due diligence checklists for AI-specific issues: training data provenance, model update cadence, red-team results, and incident reporting SLAs.
- Right-size for smaller institutions: Prioritize high-impact use cases, implement lightweight model documentation, and leverage shared services where possible.
- Strengthen monitoring: Define performance, fairness, and drift thresholds; set alerting; and run periodic challenge sessions with independent reviewers.
- Refresh incident response: Add AI failure modes (data contamination, prompt injection, model misrouting) to playbooks and run tabletop exercises.
- Educate decision-makers: Provide short, role-based training for boards, risk committees, and model owners to speed approvals without sacrificing control.
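The inventory and monitoring steps above can be sketched in a few lines of code. This is a hypothetical illustration, not anything prescribed by the Treasury resources: the record fields, metric names (a PSI-style drift score is a common choice), and threshold values are all illustrative assumptions an institution would replace with its own.

```python
from dataclasses import dataclass

# Hypothetical inventory record for one AI use case, capturing the
# attributes suggested above: data sources, purpose, decision
# criticality, and user access. Field names are illustrative.
@dataclass
class AIUseCase:
    name: str
    purpose: str
    data_sources: list
    decision_criticality: str  # e.g. "low", "medium", "high"
    user_access: list

# Hypothetical monitoring check: return the metrics whose absolute
# value exceeds the threshold agreed with model owners.
def drift_alerts(metrics: dict, thresholds: dict) -> list:
    return [name for name, value in metrics.items()
            if abs(value) > thresholds.get(name, float("inf"))]

inventory = [
    AIUseCase(
        name="credit-memo summarizer",
        purpose="draft first-pass credit memos",
        data_sources=["loan files", "financial statements"],
        decision_criticality="medium",
        user_access=["credit analysts"],
    ),
]

# Weekly metric snapshot compared against preset thresholds;
# here only the drift score breaches its limit.
alerts = drift_alerts(
    metrics={"psi_score": 0.28, "approval_rate_delta": 0.01},
    thresholds={"psi_score": 0.25, "approval_rate_delta": 0.05},
)
print(alerts)
```

Even a lightweight register like this gives risk, compliance, and technology teams a shared artifact to review, which is the point of the lexicon and RMF.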
The AIEOG's public-private model, including coordination with bodies like the FSSCC, signals a practical path forward: shared language, common frameworks, and execution focused on measurable risk reduction. Expect additional releases this month to extend these foundations.
For ongoing practitioner guides and sector use cases, explore AI for Finance.