Wealth managers back AI, but client trust is decreasing
Client trust in artificial intelligence is slipping, even as wealth managers double down on it. New research from Avaloq shows a widening gap between industry confidence and client skepticism that leaders can't ignore.
40% of wealth managers now believe clients will never trust AI in investment advice, up from 24% in 2024. The same share say clients will never trust AI in financial planning, up from 28%. Yet 87% still see AI as integral to the future of their work, and 82% expect benefits for the wider industry. The survey captured views from over 400 wealth managers worldwide.
The disconnect to solve
Clients don't reject AI across the board; they reject black boxes, unclear accountability, and the risk of impersonal advice. Market volatility and headlines about AI mistakes amplify that unease. Meanwhile, managers see real gains in speed, personalization, and decision support.
As Avaloq's UK managing director Suman Rao puts it: "While wealth managers see AI as integral to the future of their work and the industry, many clients are unconvinced about its role in investment decisions and financial planning... The human touch will always be important to clients... embedding transparency, accountability and strong human oversight into AI-driven solutions."
What leaders should do now
- Declare the rules of use: Publish a simple policy on where AI is used (research, monitoring, planning scenarios), where it isn't (final discretion), and who is accountable (the adviser and the firm).
- Human-in-the-loop by default: Require adviser review and sign-off for any AI-informed recommendation. Make "human override" easy and routine.
- Client consent and choice: Offer opt-in by use case, plus a clear opt-out. Log preferences in the CRM and reflect them in workflows.
- Explainability at point of advice: Show the "why" behind outputs (assumptions, data ranges, and key risk drivers) in plain language.
- Red-line sensitive areas: Prohibit AI from setting risk profiles, making suitability decisions, or executing trades without human approval.
- Audit trails: Capture prompts, model versions, data sources, overrides, and rationales. Retain evidence for compliance reviews.
- Model risk management: Test for bias, drift, and hallucinations. Calibrate thresholds for confidence and automatically flag low-confidence outputs.
- Data governance: Minimize personal data, restrict sensitive attributes, and control vendor access. Align with your privacy notices.
- Incident response: Define escalation paths for AI-related errors, with client notifications and remediation standards.
- Adviser training: Coach teams to challenge AI outputs, communicate their limitations, and document judgment.
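The audit-trail point above can be sketched as a simple structured log entry. This is a minimal illustration, not a prescribed schema: the field names and values are assumptions for the example.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AdviceAuditRecord:
    """One log entry per AI-informed recommendation (fields are illustrative)."""
    client_id: str
    model_version: str          # which model produced the output
    prompt: str                 # what the adviser asked the model
    data_sources: list[str]     # inputs the model was given
    ai_output_summary: str      # what the model suggested
    adviser_decision: str       # "accepted", "adjusted", or "overridden"
    rationale: str              # the adviser's documented judgment
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: an adviser adjusts an AI suggestion and records why.
record = AdviceAuditRecord(
    client_id="C-1042",
    model_version="research-model-2025.06",
    prompt="Stress test a 60/40 portfolio under rate-cut scenarios",
    data_sources=["market-data-feed", "client-risk-profile"],
    ai_output_summary="Suggested modest duration extension",
    adviser_decision="adjusted",
    rationale="Client liquidity needs cap duration at 5 years",
)
```

Serializing each record (for example via `asdict`) gives compliance teams the prompts, model versions, overrides, and rationales the list above calls for.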
How to talk about AI with clients
- Be transparent: "We use AI to surface options faster and stress test scenarios. A qualified adviser makes the final call."
- Be specific: "AI screens thousands of data points; we verify, adjust, and explain the recommendation in your context."
- Set boundaries: "AI never replaces your risk profile or executes trades. It supports our research and monitoring."
- Invite control: "You can opt out of any AI-supported step, with no impact on service quality."
Governance must-haves
- Board-level ownership: Assign clear responsibility for AI strategy, risk appetite, and oversight.
- Use-case registry: Inventory every AI use, mapped to risks, controls, and KPIs.
- Third-party diligence: Assess vendors for data handling, security, training data provenance, and model transparency.
- Policy alignment: Link AI controls to suitability, conflicts, product governance, and Consumer Duty outcomes.
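A use-case registry can start as simply as a structured inventory that is easy to query for missing controls. The entry below is a hypothetical sketch; the fields and control names are assumptions, not a standard.

```python
# Illustrative use-case registry: every AI use mapped to risks, controls, KPIs.
registry = {
    "portfolio-research": {
        "description": "AI screens securities and summarizes research",
        "risks": ["hallucinated facts", "stale data"],
        "controls": ["adviser sign-off", "source citation check"],
        "kpis": ["override rate", "error rate"],
        "owner": "Head of Advisory",
    },
}

def entries_missing_control(reg, required="adviser sign-off"):
    """Return registry entries that lack a required control."""
    return [name for name, entry in reg.items()
            if required not in entry["controls"]]

print(entries_missing_control(registry))  # []
```

Even a spreadsheet-level version of this lets oversight functions spot uses that have no named owner or no human sign-off.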
For regulatory context, see the FCA's Consumer Duty guidance and the NIST AI Risk Management Framework.
Metrics that matter
- Trust indicators: client opt-in rates, complaints mentioning AI, consent reversals.
- Quality signals: adviser override rates by model/use case, reasons for overrides, error rates.
- Outcome measures: suitability findings, dispersion of outcomes for similar profiles, time-to-recommendation.
- Control health: model drift alerts closed on time, data access exceptions, vendor SLA breaches.
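Most of the quality signals above can be computed directly from the decision logs. A minimal sketch, assuming a log schema like the audit-trail idea earlier (the sample data is invented for illustration):

```python
# Compute the adviser override rate per use case from decision logs
# (log schema and sample values are illustrative assumptions).
decisions = [
    {"use_case": "research", "adviser_decision": "accepted"},
    {"use_case": "research", "adviser_decision": "overridden"},
    {"use_case": "planning", "adviser_decision": "accepted"},
    {"use_case": "planning", "adviser_decision": "accepted"},
]

def override_rate(logs, use_case):
    """Share of AI-informed recommendations the adviser overrode."""
    relevant = [d for d in logs if d["use_case"] == use_case]
    if not relevant:
        return 0.0
    overridden = sum(d["adviser_decision"] == "overridden" for d in relevant)
    return overridden / len(relevant)

print(override_rate(decisions, "research"))  # 0.5
```

A rising override rate for one model or use case is an early warning worth investigating before clients notice.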
90-day action plan
- Days 0-30: Catalog AI use cases, set red lines, draft client disclosures, and implement basic logging.
- Days 31-60: Pilot with opt-in clients, train advisers on review and documentation, and add explainability to advice packs.
- Days 61-90: Stand up model risk checks, formalize incident response, and report trust/quality KPIs to the exec team.
The bottom line
Clients don't need AI to be perfect; they need it to be clear, controlled, and accountable. Keep the human adviser firmly in charge, prove it with governance, and make transparency part of the experience. Do that, and the efficiency gains won't come at the cost of trust.
For more practical playbooks on setting AI guardrails and executive governance, see AI for Executives & Strategy.