Finance Leaders Must Demand AI They Can Explain, or Risk Their Credibility
The question about AI in finance is no longer whether it works. It's whether you can trust it, understand it and defend it to an auditor.
That shift in focus emerged from a panel at Sage Future 2026 in San Francisco, where finance executives, technologists and analysts examined why AI adoption in finance hinges on transparency rather than raw capability. The consensus was clear: vendors that bolt explainability onto existing systems after the fact will not pass audit scrutiny.
Explainability Is Not Optional. It's Functional.
Finance has always required precision. Now it requires proof.
More than half of finance professionals report improved confidence in AI after seeing systems that surface reasoning and sources alongside every output, according to IDC research. But explainability is not a feature to add later. It is part of functionality itself.
"If I can't figure out why it did what it did, then it's not functional," said Kevin Permenter, an analyst at IDC. "It's a paperweight."
That standard raises the bar for every vendor. A 2026 survey found that 72% of financial institutions are only partially aware of which vendors use AI, and not a single organization feels "extremely confident" managing AI-related risks. The implication is direct: platforms that wrap transparency around an existing model as an afterthought will fail.
Trust, Permenter said, "equals revenue."
Finance AI Must Be Deterministic, Like Accounting Itself
Aaron Harris, CTO at Sage, framed the technical requirement plainly: finance AI must be deterministic and invariant in the same way sound accounting principles are.
"Finance and programming are elegant and deterministic," Harris said. "They must always hold true to be valid."
Sage's response is a system architecture built on three layers: an integrated user interface, an agent operating system and an "arbiter," a dedicated layer that sits between users and the AI to detect hallucinated content, jailbreak attempts, prompt injection and toxic outputs before they surface in financial workflows.
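To make the arbiter concept concrete, here is a minimal sketch of what such a gate between the model and the user might look like. This is an illustrative assumption, not Sage's actual implementation: the function names, the grounding check against source records, and the crude injection pattern are all hypothetical placeholders for the production-grade classifiers a real arbiter would use.

```python
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def arbiter_check(model_output: str, source_ids: set, cited_ids: set) -> Verdict:
    """Hypothetical arbiter: screen a model's output before it reaches
    a financial workflow."""
    # 1. Grounding check: every record the output cites must exist in the
    #    source data the model was given (a simple hallucination guard).
    unknown = cited_ids - source_ids
    if unknown:
        return Verdict(False, f"uncited records: {sorted(unknown)}")
    # 2. Injection check: crude pattern match for instruction-like text
    #    smuggled into the output (stand-in for a real classifier).
    if re.search(r"ignore (all|previous) instructions", model_output, re.I):
        return Verdict(False, "possible prompt injection")
    return Verdict(True, "passed checks")

verdict = arbiter_check(
    "Invoice INV-104 is 45 days overdue.",
    source_ids={"INV-104", "INV-107"},
    cited_ids={"INV-104"},
)
print(verdict.allowed)  # True: the cited invoice exists in the source set
```

The design point is that the arbiter returns a verdict with a reason, so every blocked output is itself explainable, which is the standard the panel set for the system as a whole.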
Harris conducted a personal experiment to test where the technology stands. He built an accounting agent that consumed 220 million tokens in a single week through Anthropic's API. The finding: off-the-shelf models are insufficient for enterprise accounting tasks.
The problem is not accuracy alone. It is the model's capacity for competent, safe judgment within the specific semantics of finance. Finance has its own linguistic context, and the arbiter layer translates that context before AI outputs reach decision-makers.
AI Adoption Depends on People, Not Just Technology
Agentic AI is already reshaping daily workflows. Sage AI Labs processed 40 million predictions in 2025. That figure climbed to 400 million in 2026. The company's Sage Intacct platform already deploys AI for overdue invoice identification, variance analysis and real-time reconciliation support.
But technology adoption is dictated by people's willingness to use it, said Michael O'Reilly, founder of GrowCFO. "Creating an environment where people see it as of service to them" is the foundational work no vendor can shortcut.
CFO Accountability Is the New Standard
CFOs must be able to explain AI-driven outputs. That accountability cannot be delegated.
"If you put the numbers in the spreadsheet, they're your numbers," O'Reilly said. "AI is the same thing. It's not a get-out-of-jail free card."
The reputational dimension is the most underappreciated risk in the conversation. When AI output is wrong, it damages personal brand and individual credibility. That exposure goes deeper than inaccurate forecasts.
Mature ERP platforms must enforce least-privilege access for agents, separate reasoning from execution and build immutable audit logs of every agentic decision pathway.
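One common way to approximate the "immutable audit logs" requirement is a hash-chained, append-only log, where each entry commits to the one before it so any after-the-fact edit breaks the chain. The sketch below is a minimal illustration under that assumption; the class and field names are hypothetical, not drawn from any specific ERP platform.

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log of agent decisions (illustrative)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, agent: str, action: str, detail: dict) -> str:
        # Each entry embeds the previous entry's hash, chaining them together.
        entry = {
            "agent": agent,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Recompute every hash; tampering with any earlier entry
        # invalidates all entries after it.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("agent", "action", "detail", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In practice the chain head would be anchored in write-once storage, but even this minimal structure makes silent edits to an agent's decision history detectable, which is what auditors are really asking for.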
What Finance Leaders Should Evaluate Now
Vendor selection criteria must evolve beyond feature sets. When evaluating ERP platforms with AI, ask whether trust is architected into the core platform or applied as a surface-layer control.
- Can the system show its work? Can it trace every recommendation to source data?
- Does it hold up under audit scrutiny?
- Does it provide role-based transparency across every finance module?
- Are audit trails immutable and deterministic?
Systems that meet these standards are the only ones that belong in the financial close.
Learn more about AI for Finance and explore the AI Learning Path for CFOs to understand how to evaluate and implement AI systems responsibly.