Finance wants AI at scale. Most are still stuck in pilot mode
Financial services leaders say AI is delivering results, yet enterprise-wide rollouts remain rare. According to new survey data from Riverbed, 88% of AI projects in the sector are not deployed across the enterprise, and 62% are still in pilot or development.
Confidence remains high: nearly two-thirds of leaders rate their AI strategy positively, and 89% say AIOps returns meet or beat expectations. But only 12% report enterprise-wide deployment, and just 40% feel fully prepared to operationalise AI across the business. That gap between intent and execution is the story.
The data problem you can't ignore
Data quality is the blocker. While 92% agree that better data quality is critical to AI success, only 43% are fully confident in their data's accuracy and completeness. Without trusted, well-governed data, scaling AI is guesswork.
Regulators are watching. In the UK, the Financial Conduct Authority is reviewing how AI could reshape retail markets by 2030; its approach and discussion papers are published on the Financial Conduct Authority website.
There are bright spots. Lloyds Banking Group reported roughly £50 million in value from generative AI in 2025 and expects over £100 million in 2026 as it expands across operations and customer experience. Scale is possible, if the plumbing is right.
Tool sprawl is killing signal
On average, firms run 13 observability tools from nine vendors to monitor AI. That fragmentation makes it hard to correlate issues across apps, networks and user experience. No surprise, then, that 96% are consolidating tools and vendors, and 95% say a unified observability platform would speed up detection and resolution.
In regulated environments, auditability and resilience aren't optional. Fewer, better-integrated tools with clear lineage, SLOs and evidence trails beat a patchwork every time.
Unified communications are a silent drag on productivity
Employees spend 41% of their week in UC tools, yet only 47% are very satisfied with performance, and 44% report regular issues across video, messaging and collaboration. UC incidents make up 16% of IT tickets, with a 41-minute average time to resolve, and almost one in five take more than an hour.
If you're scaling AI while collaboration platforms stall during peak loads, you're working against yourself.
Open standards are winning
OpenTelemetry is becoming the standard in finance: 92% already use it, 99% say it reduces lock-in, and 97% view it as a foundation for future initiatives such as AI-driven automation. If your telemetry isn't portable and correlated across domains, your AIOps will plateau.
Learn more on the OpenTelemetry website.
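To make the portability point concrete, here is a minimal tracing sketch using the OpenTelemetry Python SDK. The service name, span name and attributes are illustrative assumptions, and the console exporter stands in for whichever OTLP-compatible backend a firm actually runs.

```python
# Minimal, vendor-neutral tracing sketch with the OpenTelemetry Python SDK.
# Names and attributes are illustrative; swap ConsoleSpanExporter for your
# backend's exporter to keep the telemetry portable across vendors.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "fraud-scoring"})  # hypothetical service
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("payments.instrumentation")

with tracer.start_as_current_span("score_transaction") as span:
    span.set_attribute("model.version", "2025-11-01")  # illustrative attributes
    span.set_attribute("channel", "mobile")
    # ... call the model and record the outcome on the same span ...
```

Because the spans are emitted in the standard OpenTelemetry format, the backend can be swapped without re-instrumenting applications, which is exactly the lock-in reduction respondents describe.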
Networks and data movement matter more than you think
94% say AI data movement is important to their strategy, and 37% call it critical and foundational. Network performance and security rank at the top, with 81% citing both as essential. Looking ahead, 76% plan to formalise an AI data repository strategy by 2028, spanning public cloud, edge and co-location sites under tighter governance.
Translation: data locality, latency and egress costs will decide which AI use cases scale profitably.
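As a rough illustration, the back-of-the-envelope sketch below prices a single use case with data movement included. Every volume and rate in it is an assumption chosen for illustration, not survey data or published cloud pricing.

```python
# Unit-economics sketch for one AI use case: does it still scale profitably
# once data movement is priced in? All figures are illustrative assumptions.
monthly_calls = 20_000_000        # inference requests per month
compute_cost_per_1k = 0.10        # assumed serving cost, $ per 1,000 calls
payload_kb = 250                  # data crossing a cloud/edge boundary per call
egress_rate_per_gb = 0.08         # assumed blended egress price, $/GB
value_per_call = 0.0002           # assumed benefit per call (e.g. fraud losses avoided)

compute_cost = monthly_calls / 1_000 * compute_cost_per_1k
egress_cost = monthly_calls * payload_kb / 1_000_000 * egress_rate_per_gb
cost_per_call = (compute_cost + egress_cost) / monthly_calls

print(f"Compute ${compute_cost:,.0f} + egress ${egress_cost:,.0f} per month")
print(f"Cost per inference ${cost_per_call:.5f} vs value ${value_per_call:.5f} -> "
      f"{'scales' if value_per_call > cost_per_call else 'rethink data placement'}")
```

Change the placement assumptions (keeping inference next to the data, caching at the edge) and the egress line, and sometimes the verdict, moves with them.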
What finance leaders should do next
- Make data quality a program, not a project: Stand up data contracts, lineage, and ownership. Establish golden datasets for priority use cases. Track data-issue escape rates to production and fix them at the source (a minimal data-contract check is sketched after this list).
- Operationalise AI with clear guardrails: Build an MLOps stack with a model registry, versioning, approvals, SLOs and automated rollback. Bake in privacy, fairness and audit checks before release (a release-gate sketch also follows this list).
- Consolidate observability: Standardise on OpenTelemetry, reduce tool count, and enable cross-domain correlation (app, network, endpoint, UX). Your goal: one pane of glass, shared signals, faster MTTR.
- Treat the network as a first-class dependency: Plan capacity for AI traffic, enforce QoS for key apps, and encrypt in motion. Watch egress as a percent of AI spend. Decide where data lives before you design the model.
- Fix UC fundamentals: Set clear SLOs for call quality and latency, auto-diagnose root causes across network/app/device, and publish weekly MTTR and satisfaction scores. Reduce the 41-minute average resolution time to under 20 minutes.
- Prioritise by value, retire stalled pilots: Use stage gates with hard kill criteria. Scale what proves ROI in 90 days. Tie benefits to P&L owners and confirm savings with Finance.
- Rationalise vendors: Consolidate where it improves auditability and resilience. Require exportable telemetry, open standards support and negotiated exit clauses to avoid lock-in.
- Design your AI data repository strategy now: Define governed data domains, retention, residency and access patterns across cloud, edge and co-lo. Build for retrieval, reuse and compliance from day one.
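The data-contract check referenced in the first item might look like the sketch below: a hypothetical golden transactions dataset validated for required fields, null rates and freshness before it feeds any AI use case. Field names and thresholds are assumptions; in practice this logic usually lives in a dedicated framework (dbt tests, Great Expectations, Soda) and runs inside the pipeline.

```python
# Minimal data-contract check for a hypothetical golden dataset. Field names,
# thresholds and the sample rows are illustrative only.
from datetime import datetime, timezone

CONTRACT = {
    "required_fields": ["txn_id", "account_id", "amount", "currency", "booked_at"],
    "max_null_rate": 0.01,       # at most 1% missing values per field
    "max_staleness_hours": 24,   # data must be fresher than a day
}

def check_batch(rows: list[dict]) -> list[str]:
    """Return a list of contract violations for one batch of records."""
    violations = []
    for field in CONTRACT["required_fields"]:
        null_rate = sum(1 for r in rows if r.get(field) in (None, "")) / len(rows)
        if null_rate > CONTRACT["max_null_rate"]:
            violations.append(f"{field}: {null_rate:.1%} nulls exceeds contract")
    newest = max(r["booked_at"] for r in rows if r.get("booked_at"))
    age_hours = (datetime.now(timezone.utc) - newest).total_seconds() / 3600
    if age_hours > CONTRACT["max_staleness_hours"]:
        violations.append(f"batch is {age_hours:.0f}h old, breaching the freshness SLO")
    return violations

# Example: a failing batch surfaces issues before they escape to production.
batch = [
    {"txn_id": "t1", "account_id": "a1", "amount": 120.0, "currency": "GBP",
     "booked_at": datetime(2025, 1, 2, tzinfo=timezone.utc)},
    {"txn_id": "t2", "account_id": None, "amount": 75.5, "currency": "GBP",
     "booked_at": datetime(2025, 1, 2, tzinfo=timezone.utc)},
]
print(check_batch(batch))
```

Counting how often a failing batch still reaches production gives you the escape-rate metric in the weekly scorecard below.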
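The release gate from the second item can be expressed just as simply: a promotion step that checks serving, quality and fairness thresholds and rolls back automatically when any of them is breached. The metric names, thresholds and version string here are illustrative assumptions; a real stack would wire this into its model registry and CI/CD approvals.

```python
# Sketch of an automated release gate for model promotion. Metrics, thresholds
# and the rollback behaviour are illustrative, not a specific vendor's API.
from dataclasses import dataclass

@dataclass
class GateResult:
    passed: bool
    reasons: list

RELEASE_SLOS = {
    "p95_latency_ms": 300,      # serving latency budget
    "min_auc": 0.82,            # offline quality floor
    "max_fairness_gap": 0.05,   # max approval-rate gap across protected groups
}

def evaluate_gate(metrics: dict) -> GateResult:
    reasons = []
    if metrics["p95_latency_ms"] > RELEASE_SLOS["p95_latency_ms"]:
        reasons.append("latency SLO breached")
    if metrics["auc"] < RELEASE_SLOS["min_auc"]:
        reasons.append("model quality below floor")
    if metrics["fairness_gap"] > RELEASE_SLOS["max_fairness_gap"]:
        reasons.append("fairness gap too wide")
    return GateResult(passed=not reasons, reasons=reasons)

def promote_or_rollback(candidate_version: str, metrics: dict) -> str:
    result = evaluate_gate(metrics)
    if result.passed:
        return f"promote {candidate_version}"  # e.g. tag the version in the registry
    return f"rollback {candidate_version}: {', '.join(result.reasons)}"

print(promote_or_rollback("fraud-model-1.4.0",
                          {"p95_latency_ms": 340, "auc": 0.86, "fairness_gap": 0.02}))
```

The point is that the guardrails live in versioned, auditable code rather than a checklist someone remembers to run before release.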
Metrics to review weekly
- % of AI use cases in production (and meeting SLOs)
- Median time-to-production for new models/features
- Data issue escape rate to production and time-to-fix
- MTTR for cross-domain incidents; UC MTTR and satisfaction
- Tool count in observability; % coverage via OpenTelemetry
- Cost per inference and egress as % of AI spend
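Several of these figures fall straight out of incident and deployment records. The sketch below shows one way to roll them up each week; the record shapes and values are illustrative assumptions.

```python
# Weekly rollup sketch for part of the scorecard above. Record shapes and
# example values are illustrative, not survey data.
from statistics import median

incidents = [  # cross-domain incidents closed this week (minutes to resolve)
    {"domain": "uc", "mttr_min": 55}, {"domain": "uc", "mttr_min": 18},
    {"domain": "network", "mttr_min": 34}, {"domain": "app", "mttr_min": 22},
]
ai_use_cases = [
    {"name": "fraud-scoring", "in_prod": True,  "meets_slo": True},
    {"name": "doc-summaries", "in_prod": True,  "meets_slo": False},
    {"name": "kyc-triage",    "in_prod": False, "meets_slo": False},
]

prod = [u for u in ai_use_cases if u["in_prod"]]
print(f"AI use cases in production: {len(prod) / len(ai_use_cases):.0%} "
      f"(meeting SLOs: {sum(u['meets_slo'] for u in prod) / len(prod):.0%})")
print(f"Median MTTR: {median(i['mttr_min'] for i in incidents):.0f} min; "
      f"UC MTTR: {median(i['mttr_min'] for i in incidents if i['domain'] == 'uc'):.0f} min")
```

Published on the same cadence every week, these numbers make the pilot-to-production gap visible long before an annual review does.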
Why this matters now
As Riverbed's CMO Jim Gargan puts it, financial firms are sophisticated AI adopters operating under unique pressures: regulation, zero-downtime expectations, and data accuracy. The winners will simplify their stacks, improve data quality, embrace open standards and ensure their networks can carry the load. Strong returns are already on the table; consistency at scale is the new bar.
The survey covered 1,200 business decision-makers, IT leaders and technical specialists across seven countries and multiple industries, including financial services. Research was conducted by Coleman Parkes Research.
If your team is building capability in AI operations, governance and tooling for finance, see our curated list of AI tools for finance.