NTT DATA: How AI Innovation Can Help Healthcare & Finance
AI is moving from small pilots to enterprise deployment. In regulated sectors like healthcare and finance, the winners are the teams that treat governance as what lets them move fast, safely.
That's the stance of David Fearne, Vice President of AI at NTT DATA. His work focuses on turning principles like fairness, accountability and transparency into concrete system design, controls and daily practice across banking, healthcare and the public sector.
Who David Fearne is, and why it matters to healthcare
Fearne leads AI at NTT DATA, a global technology and consulting firm with deep experience in banking, insurance and healthcare. His message is simple: responsible AI is an enabler of trust and adoption. Build governance, explainability and human oversight into every stage, and AI becomes safer, more scalable and easier to audit.
Healthcare leaders can apply the same approach used in finance: set clear decision boundaries, match controls to risk, and make accountability traceable from data inputs to outcomes that affect patients.
Governance is the speed enabler
Innovation and governance aren't competing goals. The best programs start with intent: which decisions can AI influence, where must it defer to humans, and how much risk is acceptable for each use case.
Not every model needs the same explainability or control. Treating them all the same creates friction. Define rules up front, build them into the delivery lifecycle, and evaluation becomes continuous, not a one-off gate.
- Be explicit about decision rights: clinical triage vs. non-clinical admin tasks will need different guardrails.
- Set risk tiers with matching levels of explainability, monitoring and human oversight.
- Decide escalation thresholds before deployment. Make them technical, not just policy on paper (see the sketch after this list).
- Track data provenance and quality from source to prediction.
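To make "rules defined up front" concrete, here is a minimal sketch of risk tiers and escalation thresholds expressed as configuration rather than policy prose. The tier names, use cases and thresholds are illustrative assumptions, not NTT DATA's actual framework.

```python
# A hypothetical risk-tier policy expressed as code, so the escalation
# threshold is an enforced control rather than a paragraph in a document.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTier:
    name: str
    requires_human_signoff: bool      # must a person approve before action?
    min_explanation_detail: str       # "summary" or "full factor breakdown"
    monitoring_cadence_days: int      # how often drift/fairness reviews run
    escalation_confidence: float      # below this score, defer to a human

TIERS = {
    "clinical_triage":   RiskTier("high", True,  "full factor breakdown", 7,  0.90),
    "appointment_admin": RiskTier("low",  False, "summary",               30, 0.60),
}

def route_decision(use_case: str, model_confidence: float) -> str:
    """Apply the pre-agreed escalation threshold as a technical control."""
    tier = TIERS[use_case]
    if tier.requires_human_signoff or model_confidence < tier.escalation_confidence:
        return "escalate_to_human"
    return "proceed_automatically"

print(route_decision("clinical_triage", 0.95))    # escalate: signoff always required
print(route_decision("appointment_admin", 0.72))  # proceed: low-risk, above threshold
```

The point of the design is that changing a tier's threshold is a reviewable code change, which keeps governance decisions visible in the delivery lifecycle.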
Real risks at scale (and how to cap them)
The big risk isn't just model failure; it's overconfidence after a successful pilot. Scale brings edge cases and drift. Opacity is another risk. If decisions can't be explained, accountability blurs, especially with patient-facing or credit-related outcomes and legacy systems in the mix.
- Define system boundaries: what the AI can do, what it must not do, and technical controls that enforce those limits.
- Stand up audit logs, replayable evidence and clear escalation paths for exceptions and safety events.
- Make human oversight meaningful. Humans shouldn't rubber-stamp outputs; they should challenge, override and feed learning loops.
- Monitor for performance drift and fairness issues continuously, not just at go-live (a minimal drift-check sketch follows this list).
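One way to make continuous monitoring concrete is a scheduled drift check on key inputs. The sketch below uses the population stability index (PSI), a common drift score; the feature, reference sample and alert threshold are assumptions for illustration.

```python
# Compare live traffic against a reference sample retained at validation time.
import numpy as np

def population_stability_index(reference, live, bins=10):
    """PSI drift score; values above ~0.2 are usually treated as material drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid log(0) / division by zero on empty buckets.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
reference_ages = rng.normal(52, 15, 5_000)   # input distribution at go-live
live_ages = rng.normal(61, 15, 5_000)        # older population arriving post-launch
psi = population_stability_index(reference_ages, live_ages)
if psi > 0.2:
    print(f"PSI={psi:.2f}: raise a drift alert and trigger the review workflow")
```

Run the same check against fairness-relevant subgroups on a fixed cadence, and route alerts into the escalation paths agreed before deployment.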
Explainability and accountability by design
Accuracy alone isn't enough. You need to explain how a decision was reached, who owns it and how it can be challenged. That doesn't demand a fully interpretable model in every case; it requires appropriate explanations for each audience: regulator, clinician, patient or internal risk team.
- Provide context-aware explanations (e.g., key factors behind a triage suggestion) without overwhelming users with jargon; a sketch of audience-specific explanations follows this list.
- Make ownership traceable from data sources to model behavior and final action.
- Treat explainability and accountability as functional requirements, not optional extras.
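To show what "appropriate explanations for each audience" can look like in practice, here is a minimal sketch that formats the same top contributing factors differently for a risk team and a patient. The factor names, attribution values and wording are hypothetical; it assumes the model (or an attribution tool) already returns per-factor contributions.

```python
# Map model feature names to plain-language phrases for patient-facing output.
PLAIN_LANGUAGE = {
    "days_since_referral": "how long you have been waiting since referral",
    "symptom_severity_score": "the severity of the symptoms you reported",
    "prior_admissions_12m": "hospital admissions in the last 12 months",
}

def explain(factors: dict[str, float], audience: str) -> str:
    top = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
    if audience == "risk_team":
        # Full factor breakdown with signed contributions for audit and challenge.
        return "; ".join(f"{name}: {weight:+.2f}" for name, weight in top)
    # Patient-facing: plain language, no scores, no jargon.
    reasons = [PLAIN_LANGUAGE.get(name, name.replace("_", " ")) for name, _ in top]
    return "This suggestion was mainly based on: " + ", ".join(reasons) + "."

factors = {"symptom_severity_score": 0.41, "days_since_referral": 0.22,
           "prior_admissions_12m": -0.08, "postcode_cluster": 0.01}
print(explain(factors, "risk_team"))
print(explain(factors, "patient"))
```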
Building trust with patients and customers
Trust grows when people feel AI works with them, not on them. Be clear when AI is used, what for and how to appeal outcomes. Use AI to reduce friction (faster resolutions, more relevant support) without hiding decisions behind black boxes.
- Offer simple explanations and a fast route to a human, especially for sensitive, high-impact or emotionally charged cases.
- Keep records of appeals and overrides and feed them back into model improvement.
- Use plain language notices so patients know their options and rights.
What healthcare can borrow from aviation and banking oversight
Highly regulated fields continue to innovate by being crystal clear about acceptable risk, boundaries and escalation. They don't "approve once and forget." They monitor and adjust through the lifecycle.
- Adopt continuous evaluation and post-deployment monitoring as normal practice.
- Assign named responsibility even when automation is involved: no ambiguity if something goes wrong.
- Apply governance consistently, not selectively, to build confidence with clinicians, auditors and the public.
How NTT DATA operationalises responsible AI in complex, legacy environments
Most institutions aren't starting from a clean slate. Fearne's team integrates responsible AI into existing systems and controls instead of forcing wholesale replatforming. They add intermediary layers (evaluation services, audit pipelines and decision orchestration) that sit alongside legacy platforms; a minimal sketch of such a layer follows the list below.
They also align delivery with risk and compliance processes and prioritise skills transfer so clients can govern, adapt and improve their AI long after the initial engagement.
- Add a monitoring and evaluation layer for drift, bias and safety signals without disrupting core systems.
- Map AI behavior to regulatory expectations (e.g., the EU AI Act) in a testable, repeatable way.
- Integrate AI changes into existing change control, model risk and clinical safety workflows.
- Upskill teams to own the governance day to day; don't outsource accountability.
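As a rough illustration of an intermediary layer, the sketch below wraps a model call with input validation, a replayable audit record and a safe fallback, leaving the legacy system and the model itself untouched. The endpoint, field names and versioning scheme are assumptions, not a description of NTT DATA's implementation.

```python
# A hypothetical orchestration layer sitting between a legacy system and a model.
import json, datetime, uuid

REQUIRED_FIELDS = {"patient_id", "symptom_severity_score", "days_since_referral"}

def call_model(payload: dict) -> dict:
    # Placeholder for the existing model endpoint; swap in the real client call.
    return {"recommendation": "routine", "confidence": 0.83}

def orchestrate(payload: dict, audit_path: str = "decisions.jsonl") -> dict:
    if not REQUIRED_FIELDS.issubset(payload):
        result = {"recommendation": "refer_to_human", "reason": "incomplete input"}
    else:
        result = call_model(payload)
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input": payload,
        "output": result,
        "model_version": "triage-model-1.4.2",   # assumed versioning scheme
    }
    with open(audit_path, "a", encoding="utf-8") as f:   # replayable evidence
        f.write(json.dumps(record) + "\n")
    return result

print(orchestrate({"patient_id": "p-001", "symptom_severity_score": 7,
                   "days_since_referral": 21}))
```

Because the layer only observes and records, it can be added without changing core systems, and the audit trail it produces is what makes behaviour testable against regulatory expectations.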
What's next: adaptive, continuous governance
Static rulebooks won't keep pace with new models and use cases. The future is continuous oversight: real-time monitoring and automated evaluation paired with clear human accountability.
Controls will differ by use case. High-impact decisions carry stricter checks; lower-risk tasks get a lighter touch to keep delivery moving. Transparency becomes a differentiator: organisations that can show how systems behave, learn and get corrected will earn trust with regulators and patients.
Quick-start checklist for healthcare leaders
- Inventory AI use cases and score impact and risk (clinical safety, equity, privacy, security, financial).
- Define decision rights: where AI assists, where it recommends, where it must defer to a human.
- Set risk tiers with matching explainability, monitoring cadence and escalation paths.
- Document data provenance and quality checks; log feature sources and transformations.
- Establish fairness metrics that reflect your patient population and relevant protected groups.
- Create pre-deployment test plans and continuous evaluation (performance, drift, bias, safety events).
- Build technical guardrails: rate limits, input validation, policy constraints, safe fallbacks (sketched after this checklist).
- Stand up incident response: detection, triage, rollback, notification and root-cause analysis.
- Maintain end-to-end audit trails for models, prompts, versions and overrides.
- Write patient-facing explanations and appeal processes in plain language and train staff on them.
- Assign accountable owners (product, clinical safety, data, model risk) and meet on a regular cadence.
- Invest in upskilling so clinicians, data teams and compliance share a common playbook.
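For the guardrails item above, here is a minimal sketch combining input validation, a crude rate limit, a policy constraint and a safe fallback. The limits, field names and blocked use case are illustrative assumptions.

```python
# Hypothetical guardrails wrapped around any prediction function.
import time
from collections import deque

_recent_calls: deque[float] = deque(maxlen=100)
MAX_CALLS_PER_MINUTE = 60
BLOCKED_USES = {"fully_automated_discharge"}   # policy constraint: never automate this

def guarded_predict(use_case: str, features: dict, predict) -> dict:
    now = time.monotonic()
    _recent_calls.append(now)
    calls_last_minute = sum(1 for t in _recent_calls if now - t < 60)
    if use_case in BLOCKED_USES:
        return {"status": "refused", "reason": "use case outside approved boundary"}
    if calls_last_minute > MAX_CALLS_PER_MINUTE:
        return {"status": "deferred", "reason": "rate limit hit, queue for human review"}
    if not isinstance(features.get("symptom_severity_score"), (int, float)):
        return {"status": "fallback", "reason": "invalid input, route to standard pathway"}
    return {"status": "ok", "result": predict(features)}

print(guarded_predict("triage_support", {"symptom_severity_score": 6},
                      lambda f: {"recommendation": "routine"}))
```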
If your team needs structured learning paths to build these capabilities, explore role-based options here: AI courses by job role.