Stop Copying, Start Governing: A Practical AI Framework for Finance

Finance leaders warn: define the problem, govern your data, and build for explainability, or you'll pay later. Start small, document changes, measure well, and skip copycat stacks.

Published on: Mar 06, 2026

Financial institutions are sprinting into AI - and picking up other people's problems along the way. That was the clear warning from a recent webinar hosted by RegTech firm Hawk in partnership with ACAMS, featuring Adrianna Fabijanska (ING), Michael Morrison (Wintrust Financial Corporation), and Kyle Daddio (Grant Thornton US), moderated by Erica Brackman (Hawk).

The takeaway: define the problem, govern the data, and build for explainability - or you'll pay for it later.

Define the problem first

AI that doesn't serve a specific purpose wastes time and invites risk. As Michael Morrison put it, "Good AI isn't just accurate, it's operationally embedded and defensible. This starts at the point of selecting the right AI model by establishing what problems you're trying to solve with it."

Data quality sits at the core. ING's Adrianna Fabijanska stressed that poor data governance creates poor outcomes. Structure your data, document lineage, and fix quality issues before deployment - not after you're buried in unexplained false positives.

Don't follow the crowd

Kyle Daddio cautioned against the "copycat" mentality: replicating a competitor's stack without checking fit. "What really ends up happening is you're doing what was good for somebody else, not what's good for your organization."

Set long-term goals, involve the board early, and resist reactive rollouts. Erica Brackman added that vendor selection is pivotal: everyone claims to cut false positives, but only solutions that align with your risk profile, data, and systems will deliver.

Governance is an asset, not a brake

Sustainable AI programs run on clear intent and documented change. Morrison outlined a defensible framework: purpose statements, data lineage, performance metrics, and rigorous change management that tracks model updates over time.
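The documentation elements Morrison describes can be captured in a simple data structure. The sketch below is a hypothetical illustration, not a standard schema: the `ModelRecord` class, its field names, and the example values are all assumptions added here to show how a purpose statement, data lineage, and a timestamped change log might be kept together per model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Illustrative per-model documentation record (hypothetical schema)."""
    name: str
    purpose: str                    # plain-language purpose statement
    data_sources: list[str]         # lineage: where the model's data comes from
    changes: list[dict] = field(default_factory=list)

    def log_change(self, description: str, author: str) -> dict:
        # Append a timestamped entry so every update stays auditable.
        entry = {
            "at": datetime.now(timezone.utc).isoformat(),
            "description": description,
            "author": author,
        }
        self.changes.append(entry)
        return entry

# Hypothetical example values:
record = ModelRecord(
    name="tm-alert-scorer",
    purpose="Reduce false-positive transaction monitoring alerts",
    data_sources=["core_banking.transactions", "kyc.customer_profiles"],
)
record.log_change("Retrained on Q1 data; thresholds unchanged", author="model-risk")
```

The point is less the code than the discipline: every retraining or parameter shift lands in the same log, with a timestamp and an owner, so an examiner's question has a documented answer.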

Fabijanska flagged a common organizational risk: concentration of knowledge. "If an analyst can't explain why they're making the decision they are - or if an examiner comes and asks a question and there's only one person who can answer it - the AI you've designed is flawed." Broad internal literacy makes you regulator-ready.

Start narrow, prove value, then scale. Morrison recommended lower-complexity use cases to surface issues early and build credibility with auditors before attempting enterprise-wide deployments.

A simple operating checklist

  • Write a plain-language purpose statement for each model. Tie it to a specific control gap or efficiency target.
  • Map risk appetite to use cases. Define what "good" looks like for accuracy, coverage, and cost per alert.
  • Inventory data sources and document lineage. Set quality thresholds and remediation paths.
  • Select models based on the problem, constraints, and explainability needs - not hype or peer pressure.
  • Design human-in-the-loop reviews where decisions affect customers, investigations, or filings.
  • Automate documentation and change logs. Capture feature changes, retraining, and parameter shifts with timestamps.
  • Track performance with stable metrics: precision/recall, alert rate, time-to-close, and investigator overturns.
  • Stand up independent validation and audit trails. Recreate past decisions on demand.
  • Deliver analyst-facing explanations. If they can't justify an alert, you will struggle with examiners.
  • Pilot with a narrow scope, then scale in phases. Set clear exit criteria and rollback plans.
  • Engage the board and C-suite with concise dashboards and decisions that tie to strategy and risk.
  • Vet vendors for data fit, observability, and interoperability - not just promises on false positives.
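The "stable metrics" item above can be made concrete. This is a minimal sketch, assuming each alert is represented as a small record with `flagged`, `true_positive`, and `overturned` flags; the function name and record shape are illustrative assumptions, not a prescribed format.

```python
def alert_metrics(alerts: list[dict]) -> dict:
    """Compute precision, recall, alert rate, and investigator overturn rate
    from a list of alert records (hypothetical record shape)."""
    flagged = [a for a in alerts if a["flagged"]]
    true_pos = sum(a["true_positive"] for a in flagged)   # flagged and genuinely suspicious
    all_pos = sum(a["true_positive"] for a in alerts)     # all genuinely suspicious cases
    return {
        "precision": true_pos / len(flagged) if flagged else 0.0,
        "recall": true_pos / all_pos if all_pos else 0.0,
        "alert_rate": len(flagged) / len(alerts) if alerts else 0.0,
        "overturn_rate": sum(a["overturned"] for a in flagged) / len(flagged) if flagged else 0.0,
    }

# Hypothetical sample: 2 alerts fired, 1 was a true hit, 1 was overturned on review.
sample = [
    {"flagged": True,  "true_positive": True,  "overturned": False},
    {"flagged": True,  "true_positive": False, "overturned": True},
    {"flagged": False, "true_positive": True,  "overturned": False},
    {"flagged": False, "true_positive": False, "overturned": False},
]
m = alert_metrics(sample)  # precision 0.5, recall 0.5, alert_rate 0.5, overturn_rate 0.5
```

Tracking the same four numbers across every model update is what makes trend lines, and therefore change management, defensible to auditors.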

Tools that reduce drag

Putting these principles to work requires technology that handles the heavy lifting: automated documentation, change tracking, and explainable outputs. That lets compliance teams manage model lifecycles without relying entirely on data science support.

If you want reference points to align your program, the NIST AI Risk Management Framework offers a useful structure for governance and risk controls.

For finance-specific training and playbooks, see AI for Finance, and if you're leading board-level discussions or budget decisions, the AI Learning Path for CFOs can help align strategy, controls, and value creation.
