Fiduciary managers are at an early stage of AI adoption
3 February 2026
Most fiduciary managers now use AI, but it's anchored in low-risk operations rather than investment decision-making. New research from Isio, covering 13 firms managing £241bn, found that 85% have adopted AI in some form. The focus is efficiency, not alpha.
Where AI is actually being used
- 91%: drafting internal memos and summarising commentary
- 73%: reviewing legal documents
- 64%: coding and automation support
- 9%: any use in trading client assets
That 9% tells the story: AI is supporting workflows, not calling the shots. Adoption is about speed, consistency, and cost, not taking discretionary risk.
Strategy maturity is still light
- 23% have a detailed AI strategy
- 62% have early-stage plans to integrate AI more fully
- 8% have no strategy and no plans
In short, the industry is testing, learning, and tightening controls before pushing AI into higher-impact processes. As one senior leader at Isio put it, the focus today is on augmenting existing processes so clients feel the benefit through cost savings.
Investment processes: where AI may add value next
While fiduciary managers aren't letting AI make independent trades, some underlying funds already do, especially in quantitative and high-frequency strategies. Over time, AI could help with fund selection and manager research, where firms hold rich comparative data. The hardest nut to crack remains portfolio construction, where context, risk, and judgment dominate.
Isio notes that more capable AI models are close, which could drive deeper integration and influence how decisions are made across broader datasets. Sensible caution remains: there is no magic solution to make everyone rich.
What management teams should do now
- Set governance that scales: define an AI policy, accountability (business owner + model risk), and approval thresholds for new use cases.
- Map use cases by risk: prioritise high-volume, low-risk tasks; require human oversight for anything touching client portfolios or compliance.
- Lock down data: control prompt and output data flows, redact sensitive information, and align with privacy rules. See the NIST AI Risk Management Framework for a practical structure.
- Strengthen model risk management: document model purpose, limits, testing, monitoring, and change logs. Keep an audit trail for every material decision.
- Tighten vendor diligence: review security, IP terms, data retention, bias testing, and incident response. Ensure you can export logs and evidence.
- Pilot in research, not trading: start with manager screening, document synthesis, fee benchmarking, and operational automation. Measure accuracy, throughput, and cost before scaling.
- Upskill your team: train analysts and ops on prompts, review protocols, and failure modes. Curate a vetted shortlist of AI tools for finance use cases.
Signals to watch in 2026
- Growing use of AI inside underlying funds, especially systematic strategies
- Movement from document tasks to fund screening and attribution support
- Clearer regulatory expectations for AI governance in financial services (track the FCA's AI updates for developments)
Bottom line: treat AI as an operations co-pilot today and build the governance, data, and skills to move up the value chain later. The firms that prepare now will be ready when AI is strong enough to matter in selection and construction, without compromising control.