Can You Trust AI With Customer Conversations? What Consumers and Leaders Really Think

AI is already embedded in support, but trust depends on transparency, accuracy, and an easy out to a human. Start small, label bots, and route smartly as budgets tilt toward voice and chat.

Published on: Dec 20, 2025

Can we trust AI to run customer communications?

AI is now part of everyday support. Some customers are ready for it. Some aren't. Your job is to make the shift without breaking trust, missing SLAs, or blowing up costs.

Sinch surveyed 2,800 consumers and 1,600+ leaders across healthcare, financial services, retail, and tech. The takeaway is clear: adoption is surging, but confidence depends on transparency, accuracy, and fast paths to a human.

What the data says

  • 98% of companies surveyed are using or plan to use AI in customer communications.
  • 63% plan to invest in AI voice assistants in 2025; 46% in AI chatbots.
  • 34% of leaders worry about how consumers perceive AI.
  • 42% of consumers would trust AI trained on a company's support docs.
  • 72% of Gen Z and 58% of millennials are willing to use AI for support (39% Gen X, 20% boomers).
  • 52% of consumers would trust AI for basic answers like order status.

Customers still prefer email (31%), a live chat agent (22%), or a phone rep (19%) as their first choice for help. Only 5% chose an AI chatbot as their top option. That doesn't mean AI shouldn't be in your stack - it means you need smart routing and clear choice.

Where budgets are going (2025-2026)

  • 63% are investing in voice assistants; 46% in chatbots; 35% already invested in AI/automation this year.
  • Beyond bots, enterprises need reliable messaging APIs, global delivery, and secure, compliant data handling.
  • 2026 focus: infrastructure and partners that fit existing AI models, data pipelines, and workflows - not just "another bot."

Practical guidance for support leaders

  • Start small, win fast: status checks, FAQs, returns, simple scheduling, basic billing questions.
  • Label AI clearly and offer a one-tap escape hatch to a human at all times.
  • Build intent libraries with crisp boundaries. If confidence is low or sentiment dips, escalate.
  • Train on your own support docs, macros, and policies. Keep a rapid update loop with product and legal.
  • Deflect without deflecting: honor customer channel preference (email, voice, chat, SMS).
  • Protect data: minimize PII, apply role-based access, log everything, and set retention rules.
  • Measure what matters: containment rate, CSAT, FCR, AHT, escalation rate, time to first response, resolution speed, and refund/error rates.
  • Continuously test: red-team prompts, adversarial inputs, hallucination checks, compliance scenarios.
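The escalation guidance above can be sketched as a simple routing function. This is a minimal illustration, not a production router: the thresholds, the `Turn` fields, and the allowed-intent list are all assumptions you would tune from your own escalation data.

```python
from dataclasses import dataclass

# Illustrative thresholds -- tune per intent from real escalation data.
CONFIDENCE_FLOOR = 0.75   # below this, the bot should not answer on its own
SENTIMENT_FLOOR = -0.3    # below this, the customer is frustrated; hand off

@dataclass
class Turn:
    intent: str
    confidence: float   # classifier confidence for the detected intent, 0..1
    sentiment: float    # -1 (angry) .. +1 (happy)
    user_requested_human: bool = False

# Intents the bot may handle end to end (hypothetical "crisp boundary" list).
ALLOWED_INTENTS = {"order_status", "faq", "returns", "scheduling", "billing_basic"}

def route(turn: Turn) -> str:
    """Return 'bot' or 'human' for a single conversation turn."""
    if turn.user_requested_human:           # one-tap escape hatch, always honored
        return "human"
    if turn.intent not in ALLOWED_INTENTS:  # outside the boundary: escalate
        return "human"
    if turn.confidence < CONFIDENCE_FLOOR or turn.sentiment < SENTIMENT_FLOOR:
        return "human"
    return "bot"
```

The key design choice is that every branch defaults to a human: the bot answers only when intent, confidence, and sentiment all clear their bars.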

Customer comfort by generation

Willingness is highest among younger customers (72% Gen Z, 58% millennials). Adoption will likely grow as experiences improve and handoffs get smoother. Give people choice, show your work, and confidence follows.

Healthcare: trust starts with privacy, accuracy, and speed

  • Where leaders are using AI: 54% automating info via chatbots, 52% predictive communication, 51% data analysis.
  • Gaps: only 32% use AI for appointment scheduling, but 57% of AI-comfortable consumers want exactly that.
  • Top concerns for leaders: 55% data privacy/security, 40% accuracy, 39% regulatory compliance.
  • Patient sentiment: 35% would use an AI chatbot, 40% would not, 25% unsure. Accuracy (64%), privacy (40%), and "too impersonal" (43%) drive hesitation.

What moves the needle? Faster care. Willingness jumps past 40% when AI shortens time to treatment, even with symptom details involved.

Action for support teams: prioritize scheduling, pre-visit Q&A, referrals, and post-visit follow-ups. Design for warmth and empathy in tone and intent. Make privacy settings visible and simple.

Financial services: cautious customers, strong utility

  • Where leaders are using AI: 53% automate support responses, 53% analyze client data, 49% offer basic financial tips.
  • Concerns: 41% data/security, 37% consumer perception, 35% accuracy.
  • Consumer appetite for AI financial advice: 36% willing, 43% not, 21% unsure. Many still want a human for recommendations; 41% prefer voice or video with an advisor if not in person.

The lane that works now: conversational banking for account balances, transactions, upcoming payments, card controls, and fraud alerts. Use AI to do the heavy lifting, then deliver advice through trusted human channels.

Retail: personalization without the creep

  • Where retailers use AI: 48% real-time shipping updates, 45% personalized offers, 45% automated service via chatbots.
  • Concerns: 48% privacy, 44% accuracy, 37% customer trust.
  • Comfort zones: 52% of consumers want AI for order tracking (67% Gen Z, 63% millennials); 76% are open to returns/exchanges via text-based bots.
  • Reality check: 40% feel uneasy sharing preferences with a bot, yet 70%+ value recommendations if they "make sense."

Play it clean with zero- and first-party data. Add helpful nudges (low stock, price drop, delivery delay) and convenience moments (easy returns) that earn trust over time.

Agentic AI is next - plan the guardrails now

Expect AI agents that don't just respond - they act. They'll understand goals, take steps across channels, and follow through. That only works if customers trust why the agent moved, what it did, and how to stop it.

What to set up:

  • Clear scopes: which actions the agent may take (e.g., reschedule, refund up to $X, send a verification link).
  • Approvals: customer permission for sensitive actions; human approvals for high-risk steps.
  • Receipts: every action produces a transparent "what/why/when" log the customer can see.
  • Handoffs: any stalled or sensitive workflow routes to a specialist with full context.
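The scope, approval, and receipt ideas above can be sketched as a small guardrail policy. Everything here is a hypothetical illustration: the action names, the `$50` auto-refund limit, and the three-way verdict are assumptions, not a vendor API.

```python
from datetime import datetime, timezone

REFUND_AUTO_LIMIT = 50.00   # assumption: refunds above this need human approval

# Per-action scopes: what the agent may do alone, conditionally, or never.
SCOPES = {
    "reschedule": "auto",
    "send_verification_link": "auto",
    "refund": "conditional",          # depends on amount
    "close_account": "human_only",    # always routed to a specialist
}

def authorize(action: str, amount: float = 0.0) -> str:
    """Return 'allow', 'needs_human', or 'deny' for a proposed agent action."""
    scope = SCOPES.get(action)
    if scope is None:
        return "deny"                 # unknown action: the agent never improvises
    if scope == "human_only":
        return "needs_human"
    if scope == "conditional" and amount > REFUND_AUTO_LIMIT:
        return "needs_human"
    return "allow"

def receipt(action: str, reason: str) -> dict:
    """Produce the what/why/when record the customer can see."""
    return {
        "what": action,
        "why": reason,
        "when": datetime.now(timezone.utc).isoformat(),
    }
```

Note the default: an action absent from the scope table is denied outright, which is what keeps an agent from inventing capabilities mid-conversation.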

As one industry leader put it, AI agents are moving from cost-cutting to real customer context - greeting people by name, remembering history, and getting things done. That's the bar.

Implementation checklist

  • Map top-10 intents by volume and effort; launch 3-5 low-risk intents first.
  • Design routing rules: confidence thresholds, sentiment triggers, banned actions, and instant human opt-outs.
  • Build a knowledge backbone: current help center, policies, macros, snippets, and decision trees.
  • Instrument everything: analytics events for every message, action, handoff, and outcome.
  • Security controls: PII masking, encryption, RBAC, audit trails, and data retention windows.
  • Compliance review: document DPIAs/TRLs and align with an AI risk framework like the NIST AI RMF.
  • Agent playbooks: allowed actions, spending limits, approval tiers, fallback paths.
  • Train the team: conversational design, prompt hygiene, and escalation etiquette.

KPIs to track (and how to read them)

  • Containment vs. CSAT: high containment with flat CSAT = good; high containment with falling CSAT = silent churn risk.
  • First-contact resolution and time-to-resolution: should improve as AI handles repeatable tasks.
  • Escalation rate and reasons: informs gaps in intents or knowledge.
  • Refunds/returns error rate: flags accuracy and policy adherence issues.
  • Agent productivity: reduction in handle time for augmented agents, not just deflected volume.
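The containment-vs-CSAT read above is simple arithmetic, sketched below. The field names and the 0.02 CSAT drop tolerance are assumptions; map them to your own analytics events and agree the tolerance with your CX team.

```python
def containment_rate(total_conversations: int, escalated: int) -> float:
    """Share of conversations resolved without a human handoff."""
    return (total_conversations - escalated) / total_conversations

def read_containment_vs_csat(containment: float, csat_now: float,
                             csat_prev: float, drop_tol: float = 0.02) -> str:
    """High containment only counts if CSAT holds; falling CSAT = churn risk."""
    if csat_prev - csat_now > drop_tol:
        return "silent churn risk"
    return "healthy automation"

# Example month: 10,000 conversations, 2,300 escalated to a human.
rate = containment_rate(10_000, 2_300)                            # 0.77
verdict = read_containment_vs_csat(rate, csat_now=4.1, csat_prev=4.5)
```

In the example, containment looks strong at 77%, but the CSAT slide from 4.5 to 4.1 flips the verdict to churn risk, which is exactly the trap the bullet warns about.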

Build on the right foundation

  • Scalable messaging APIs across channels (SMS, RCS, WhatsApp, chat, email, voice).
  • Reliable delivery with global reach and clear fallbacks.
  • Secure data flow with consent capture, preference management, and auditability.
  • Easy hooks to your CRM, ticketing, CDP, order systems, and identity providers.

Upskill your team

Your support org needs conversational design, prompt discipline, and agent handoff skills. If you're building this muscle, explore focused learning paths and certifications.

Bottom line

AI can reduce wait times, cut repetitive workload, and keep customers moving. Trust is the lever. Be clear about what the system knows, what it can do, and how to reach a human - and you'll earn the right to automate more.

If you get the basics right now, agent-based experiences in 2026 won't feel risky. They'll feel obvious.

