Banking on Trust: NTT DATA's Approach to AI for Finance
AI is scaling across finance. The pressure is clear: improve decisions, move faster, and stay compliant. Trust is the variable that decides whether AI becomes an asset or a liability.
David Fearne, Vice President of AI at NTT DATA, works with banks, insurers and public sector teams to turn ethics and governance into working systems. His stance is simple: fairness, accountability and transparency aren't blockers. They're the way you move at scale without losing control.
Innovation vs. governance is a false trade-off
The fastest programs treat governance as part of design, not an afterthought. Start with intent. Define which decisions AI can influence, which it can't, where humans must own the call, and how risk tolerance changes by use case.
Operationalize it. Set model selection criteria, data provenance rules, evaluation gates and escalation thresholds before build. Keep testing after launch with continuous evaluation, not one-time checks. When teams know the boundaries, they move faster, and regulators get clarity on how decisions are made.
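An evaluation gate like this can be as simple as a per-use-case threshold check that refuses promotion and records why. The sketch below is illustrative only: the metric names and threshold values are assumptions, not NTT DATA's actual criteria.

```python
# Minimal sketch of a pre-deployment evaluation gate.
# Thresholds and metric names are illustrative, not a real policy.
from dataclasses import dataclass

@dataclass
class EvalResult:
    accuracy: float
    fairness_gap: float   # e.g. demographic parity difference across groups
    drift_score: float    # distribution shift vs. training data

# Risk tolerance changes by use case, so gates do too.
GATES = {
    "credit_decisioning": {"min_accuracy": 0.90, "max_fairness_gap": 0.05, "max_drift": 0.10},
    "support_triage":     {"min_accuracy": 0.80, "max_fairness_gap": 0.10, "max_drift": 0.25},
}

def passes_gate(use_case: str, result: EvalResult) -> tuple[bool, list[str]]:
    """Return (passed, reasons) so failures are auditable, not silent."""
    g = GATES[use_case]
    reasons = []
    if result.accuracy < g["min_accuracy"]:
        reasons.append(f"accuracy {result.accuracy:.2f} < {g['min_accuracy']}")
    if result.fairness_gap > g["max_fairness_gap"]:
        reasons.append(f"fairness gap {result.fairness_gap:.2f} > {g['max_fairness_gap']}")
    if result.drift_score > g["max_drift"]:
        reasons.append(f"drift {result.drift_score:.2f} > {g['max_drift']}")
    return (not reasons, reasons)
```

Because the gate returns reasons rather than a bare boolean, the same check doubles as an audit artifact and an escalation trigger.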
The real risks at scale
The biggest risk isn't model failure. It's overconfidence. Pilots look clean. Production brings edge cases, drift and integration pain with legacy stacks.
Mitigation is practical: define system boundaries and enforce them in code, not just policy. Stand up evaluation frameworks, audit logs and clear escalation paths. Human oversight has to be meaningful: people must challenge, override and feed back into the loop so the system improves over time.
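"Enforced in code, not just policy" can look like a thin wrapper that routes any decision outside its boundary to a human queue. A minimal sketch, assuming hypothetical limits (a confidence floor and a maximum auto-approval amount):

```python
# Illustrative sketch: enforce a decision boundary in code and escalate
# anything outside it. The limits and field names are hypothetical.
from typing import Callable

CONFIDENCE_FLOOR = 0.85    # below this, a human owns the call
MAX_AUTO_AMOUNT = 10_000   # AI may not auto-approve above this

def decide(model: Callable[[dict], tuple[str, float]], request: dict) -> dict:
    decision, confidence = model(request)
    escalate = (
        confidence < CONFIDENCE_FLOOR
        or request.get("amount", 0) > MAX_AUTO_AMOUNT
    )
    record = {
        "decision": decision if not escalate else "ESCALATED",
        "confidence": confidence,
        "escalated": escalate,
    }
    # In production this record would also go to an append-only audit log.
    return record
```

The point of the wrapper is that the boundary cannot be skipped by any caller: policy lives in the same code path as the decision.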
Explainability and accountability as architecture
Accuracy alone doesn't pass in finance. You need to show how a decision was reached, who is responsible and how it can be challenged. Build that into the system from day one.
Explainability should fit the audience: regulators, customers and internal risk teams each need different views. Accountability must be traceable end to end: inputs, model behavior, decision, outcome. Treat both as functional requirements and you cut friction, raise internal trust and reduce surprises.
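One way to get audience-fit views from a single traceable record is to store the full decision once and render it differently per audience. A sketch under assumed field names (none of this is a prescribed schema):

```python
# Sketch: one traceable decision record, audience-specific views of it.
import hashlib
import json

def decision_record(inputs: dict, model_version: str,
                    decision: str, top_factors: list[str]) -> dict:
    """Capture inputs (hashed for privacy), model version, decision, factors."""
    return {
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "model_version": model_version,
        "decision": decision,
        "top_factors": top_factors,
    }

def explain(record: dict, audience: str) -> str:
    if audience == "customer":
        # Small, clear explanation plus an appeal route.
        return (f"Outcome: {record['decision']}. "
                f"Main reason: {record['top_factors'][0]}. "
                "You can appeal this decision.")
    if audience == "regulator":
        return json.dumps(record)  # the full traceable record
    # Internal risk view: terse, model-version-first.
    return (f"{record['model_version']}: {record['decision']} "
            f"({', '.join(record['top_factors'])})")
```

Keeping one canonical record and deriving views from it avoids the failure mode where the customer explanation and the regulator filing drift apart.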
Use AI to earn trust in customer channels
Customers trust AI when it works with them, not on them. Be transparent about where AI is used, why, and how to appeal outcomes. Small, clear explanations beat technical dumps.
Use AI to remove friction (faster resolution, proactive alerts, smarter support) while keeping humans available for vulnerable, high-impact moments. Frame AI as an assistant to staff and customers. That positioning strengthens relationships.
What finance can learn from other regulated sectors
Healthcare and aviation show the playbook: continuous evaluation, explicit roles and consistent governance. Approvals aren't one-and-done; they're monitored across the lifecycle.
Responsibility is assigned even with automation, so accountability isn't fuzzy when something breaks. Consistency builds trust across regulators, practitioners and the public. Banks that apply the same operational discipline deploy more safely and scale faster.
How NTT DATA makes this work in legacy environments
Most banks don't get to start fresh. NTT DATA integrates responsible AI into what you already run. Think intermediary layers: evaluation services, audit pipelines and decision orchestration that sit alongside core systems. You gain visibility and control without ripping out platforms.
Risk, compliance and tech teams are aligned early. AI behavior is mapped to policy and regulation in ways that are testable and repeatable. Skills transfer is a priority, so teams can govern and adapt long after go-live.
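An intermediary layer of this kind can be sketched as an orchestrator that sits between callers and the legacy core: it adds model scoring, a policy check and audit logging without modifying either side. All names below are illustrative, not NTT DATA's architecture.

```python
# Hypothetical decision-orchestration layer alongside a legacy core system.
from typing import Callable

AUDIT_LOG: list[dict] = []  # stand-in for an append-only audit pipeline

def orchestrate(request: dict,
                model: Callable[[dict], float],
                policy_ok: Callable[[dict, float], bool],
                legacy_execute: Callable[[dict], str]) -> str:
    """Score, check policy, then either call the core or route to a human."""
    score = model(request)
    allowed = policy_ok(request, score)
    outcome = legacy_execute(request) if allowed else "ROUTED_TO_HUMAN"
    AUDIT_LOG.append({"request": request, "score": score,
                      "policy_passed": allowed, "outcome": outcome})
    return outcome
```

Because the orchestrator owns the audit write and the policy check, visibility and control come from the layer itself; the core platform stays untouched.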
What's next: adaptive, continuous governance
Static rulebooks won't keep pace. The model is continuous oversight: real-time monitoring and automated evaluation paired with clear human accountability. Controls live inside systems, not bolted on top.
Expect sharper differentiation by use case: strict controls for high-impact decisions, lighter touch for low-risk tasks to keep momentum. The banks that can show how systems behave, learn and get corrected will earn more trust from regulators and customers, and turn responsible AI into an advantage.
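One common real-time monitoring signal for "how systems behave" is the population stability index (PSI), which compares live input distributions against a training baseline. A minimal sketch, using the conventional (but not universal) alarm threshold of 0.25:

```python
# Sketch of one continuous-oversight signal: population stability index (PSI)
# between a training baseline and live traffic, over pre-binned proportions.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI; values above ~0.25 are commonly read as a major shift."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # proportions at training time
live     = [0.05, 0.15, 0.30, 0.50]   # proportions observed in production
alert = psi(baseline, live) > 0.25    # would trigger human review
```

A signal like this is cheap to compute on a schedule, which is what makes "controls inside the system" practical rather than aspirational.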
Practical checklist for finance leaders
- Set decision boundaries: where AI advises, where it decides, where humans must review.
- Tier use cases by risk and match controls accordingly (explainability, oversight, approval gates).
- Track data lineage and consent. Lock in feature governance and versioning.
- Define evaluation metrics beyond accuracy (fairness, drift, stability, latency, override rates).
- Stand up monitoring, audit trails and escalation paths from day one.
- Design human-in-the-loop that adds real challenge, not rubber stamps.
- Instrument legacy integration with decision orchestration and model registries.
- Publish simple customer explanations and clear appeal routes.
- Assign owners across the lifecycle: data, model, product, risk, compliance.
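The tiering step in the checklist can start as a plain lookup from risk tier to required controls, defaulting to the strictest tier for anything unclassified. The tiers and control names below are assumptions for illustration, not a regulatory taxonomy.

```python
# Illustrative mapping from risk tier to required controls.
CONTROLS_BY_TIER = {
    "high":   {"explainability": "full",       "oversight": "human approval",        "approval_gate": True},
    "medium": {"explainability": "summary",    "oversight": "human review on sample", "approval_gate": True},
    "low":    {"explainability": "on request", "oversight": "monitoring only",        "approval_gate": False},
}

def required_controls(use_case_tier: str) -> dict:
    # Fail safe: unclassified use cases get the strictest controls.
    return CONTROLS_BY_TIER.get(use_case_tier, CONTROLS_BY_TIER["high"])
```

The fail-safe default matters: the riskiest state for a use case is "nobody has tiered it yet", and the lookup makes that state the most controlled one.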
If you need reference frameworks, the NIST AI Risk Management Framework and summaries of the EU AI Act are useful anchors for policy and controls.
Bottom line
Governance isn't red tape; it's how you scale with confidence. NTT DATA's approach, led by David Fearne, shows that fairness, explainability and human oversight can be built into everyday delivery. Do that, and AI stops being a science project and starts compounding value in your P&L, without surprising your customers or your regulator.