Sierra's AI Resolves 70% of OluKai Support With 4.5 CSAT, Freeing Agents for Complex, High-Value Work

OluKai's support now auto-resolves about 70% of inquiries with Sierra's AI while keeping CSAT at 4.5/5. Bots handle returns and routine questions; humans take the tricky stuff.


OluKai scales support with Sierra's AI: 70% auto-resolution, 4.5/5 CSAT

Footwear brand OluKai reports that its "Day Makers" support team now resolves about 70% of customer inquiries autonomously using Sierra's AI-driven platform. Despite the automation, the team is maintaining a reported 4.5/5 customer satisfaction score.

The AI handles structured tasks like return policy exceptions and educating customers on Happy Returns. Human agents stay focused on higher-complexity cases, where context, empathy, and judgment drive outcomes.

What's actually being automated

  • Return policy exceptions (clear rules, edge-case handling, and approvals where applicable)
  • Happy Returns education (guiding customers through the process and setting expectations)
  • Similar routine inquiries that follow policy and can be scripted without losing quality

Why this matters for support leaders

  • High containment without a major drop in CSAT suggests customers accept automation when it's fast and accurate.
  • Agents get time back for escalations, VIPs, and nuanced situations that impact loyalty.
  • Omnichannel coverage helps keep responses consistent across chat, email, and social.
  • Lower cost-to-serve and tighter SLAs without adding headcount.

How to apply this playbook

  • Map top intents by volume and value. Start with return flows and policy-driven questions.
  • Codify exception logic. Define thresholds for approvals, credits, and replacements.
  • Integrate returns tooling and content. If you use Happy Returns, sync policies and statuses so answers stay live and accurate.
  • Set routing rules. Escalate on sentiment spikes, account flags, or unresolved multi-turn threads (see the sketch after this list).
  • Measure fast and often: containment rate, CSAT gap vs. human, handle time, and recontact rate.
  • Create a feedback loop. Review transcripts weekly, tag failure modes, and update policies or prompts.
  • Train agents on "AI + human" workflows so handoffs feel seamless to the customer.
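
The exception thresholds and routing rules above don't have to live in agents' heads; they can be expressed as a small, testable policy function. Below is a minimal Python sketch, assuming hypothetical ticket fields (order value, delivery age, sentiment score, turn count, VIP flag) and illustrative thresholds; none of this reflects Sierra's or OluKai's actual configuration.

```python
from dataclasses import dataclass

# Illustrative thresholds -- tune these to your own return policy.
AUTO_APPROVE_LIMIT = 75.00     # refund value the AI may approve on its own
RETURN_WINDOW_DAYS = 45        # days after delivery a return is still in policy
MAX_AI_TURNS = 6               # hand off long, unresolved conversations
ESCALATE_SENTIMENT = -0.4      # hand off when sentiment drops below this

@dataclass
class Ticket:
    order_value: float         # refund amount requested
    days_since_delivery: int
    sentiment_score: float     # -1.0 (angry) .. 1.0 (delighted)
    turns: int                 # messages exchanged so far
    vip: bool                  # account flag pulled from your CRM

def route(ticket: Ticket) -> str:
    """Return 'auto_resolve' or 'escalate_to_human' for a return request."""
    # Hard stops: VIP accounts, upset customers, and stuck conversations go to a human.
    if ticket.vip or ticket.sentiment_score < ESCALATE_SENTIMENT or ticket.turns >= MAX_AI_TURNS:
        return "escalate_to_human"
    # In-policy, low-value returns are safe for the AI agent to close out on its own.
    if ticket.days_since_delivery <= RETURN_WINDOW_DAYS and ticket.order_value <= AUTO_APPROVE_LIMIT:
        return "auto_resolve"
    # Everything else is an exception that needs human judgment.
    return "escalate_to_human"

print(route(Ticket(order_value=49.0, days_since_delivery=20,
                   sentiment_score=0.2, turns=2, vip=False)))  # -> auto_resolve
```

Keeping the thresholds in one place like this lets the same numbers feed both the AI agent's instructions and your weekly QA checks, so policy and automation can't quietly drift apart.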

KPIs to watch

  • AI containment rate: share of tickets fully resolved by the AI agent (computed in the sketch after this list)
  • CSAT delta: compare AI-handled vs. human-handled interactions
  • Average handle time and first contact resolution
  • Deflection to self-service and recontact within 7 days
  • Escalation accuracy: are the right cases reaching humans at the right time?
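
If your helpdesk can export closed tickets with a handler label, a CSAT score, and a recontact flag, most of these KPIs reduce to a few aggregations. A minimal sketch, assuming a hypothetical export format; the field names are placeholders rather than any vendor's schema.

```python
from statistics import mean

# Hypothetical export: one dict per closed ticket (field names are placeholders).
tickets = [
    {"handled_by": "ai",    "csat": 5, "recontact_7d": False},
    {"handled_by": "ai",    "csat": 4, "recontact_7d": True},
    {"handled_by": "human", "csat": 5, "recontact_7d": False},
    {"handled_by": "ai",    "csat": 4, "recontact_7d": False},
]

ai = [t for t in tickets if t["handled_by"] == "ai"]
human = [t for t in tickets if t["handled_by"] == "human"]

containment = len(ai) / len(tickets)                         # share fully resolved by the AI
csat_delta = mean(t["csat"] for t in ai) - mean(t["csat"] for t in human)
recontact_7d = sum(t["recontact_7d"] for t in ai) / len(ai)  # 7-day recontact on AI tickets

print(f"containment={containment:.0%}  csat_delta={csat_delta:+.2f}  recontact_7d={recontact_7d:.0%}")
```

Tracked weekly, the same three numbers make it obvious whether rising containment is coming at the cost of satisfaction or repeat contacts.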

Risks and how to mitigate

  • Policy drift: lock policies to a single source of truth and version changes.
  • Edge cases: define hard stops where AI must hand off; audit exceptions weekly.
  • Tone control: set style guides and test across channels to avoid robotic or overly casual replies.
  • Compliance and privacy: restrict data access and log every automated decision for review (see the sketch after this list).
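
One way to contain both policy drift and compliance risk is to pin every automated decision to a versioned policy and write it to an append-only audit log. A minimal sketch, assuming a hypothetical JSON Lines log file; the record structure is illustrative, not Sierra's actual logging.

```python
import hashlib
import json
from datetime import datetime, timezone

# Single source of truth for the return policy; version and hash it so silent edits are detectable.
POLICY = {"version": "2026-01-02", "return_window_days": 45, "auto_approve_limit": 75.00}
POLICY_HASH = hashlib.sha256(json.dumps(POLICY, sort_keys=True).encode()).hexdigest()[:12]

def log_decision(ticket_id: str, decision: str, reason: str, path: str = "decisions.jsonl") -> None:
    """Append one auditable record per automated decision, tied to the policy version used."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "ticket_id": ticket_id,       # keep PII out of the log; reference IDs only
        "decision": decision,
        "reason": reason,
        "policy_version": POLICY["version"],
        "policy_hash": POLICY_HASH,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("T-1042", "auto_resolve", "in-window return under the auto-approve limit")
```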

What this signals for ops and investors

Automating a large share of support without degrading satisfaction points to real efficiency gains. If similar outcomes repeat across more brands, it supports recurring revenue growth and higher retention for Sierra, and a stronger position among AI-powered customer experience platforms.

For support teams, the takeaway is simple: focus AI on clear, policy-driven work, protect the edge cases with smart routing, and constantly tune the system based on actual conversations. That's how you keep quality high while scaling volume.
