How Smarsh built an AI front door for regulated support - and hit 59% self-service adoption
Smarsh set a clear target: scale service with AI and lift productivity by 30%. The bigger issue wasn't headcount - it was customer friction across products, docs, and compliance steps. The fix was focus. One entry point. Plain language in, precise help out.
They called it Archie - an AI "front door" trained on Smarsh knowledge, built to cut through clutter and get customers to the right answer or action fast.
The shift: from a chatbot to a true front door
Traditional self-service makes users play twenty questions. Archie flips that. Customers describe the job to be done, and the agent routes, retrieves, or executes across connected systems. Less hunting. More doing.
Rohit Khanna, chief customer officer, summed up the goal: "How do we use what we already know and present it to customers in a way that makes our teams more efficient, and service more effective?"
Why a unified platform mattered
Smarsh built Archie on Salesforce's Agentforce 360 Platform for shared context, controlled execution, and orchestration. That choice reduced the "last mile" risk where pilots stall. Instead of gluing point tools, they used one stack to plan and complete work.
The outcome targets are specific: a 20% increase in self-service success rates, 25% faster resolutions versus search-and-browse, and a 30% productivity lift for reps. Platform consistency also supports the compliance rigor their customers require. Learn more: Salesforce Agentforce.
Data trust: the non-negotiable
Effective AI starts with clean, secure, labeled data. Smarsh invested years preparing its knowledge: rationalized, annotated, anonymized, and locked down. That prep meant Archie could go to production instead of getting stuck in "pilot forever."
Janine Deegan, digital support program manager, worked with Salesforce admins to connect documentation directly to Agentforce, backed by the Salesforce Trust Layer. The documentation and AI teams now operate as one loop: docs ship, AI validates, and only then is content opened to the model. Quality and access are managed at the source.
Compliance and model risk management
Smarsh supports financial institutions where data custody and identity controls are strict. Banks and regulators ask for model risk management (MRM): what model, what data exposure, what controls. Smarsh partnered with Salesforce to provide the documentation customers need for MRM reviews and audits.
If you need a primer for your own MRM packet, the supervisory framework used by banks is a helpful reference: Federal Reserve SR 11-7 / OCC Bulletin 2011-12.
Adoption: personalization did the heavy lifting
Early on, some customers weren't sure how to use the new text box. The team adjusted: clearer change management, examples in the UI, and guidance that said, "Ask in natural language."
Personalization turned the corner. With responses tuned to product, entitlement, and history, adoption climbed to 59%. The plan now is to extend Archie across more products with the same playbook.
What support leaders can copy
- Define the front door. One entry point for all issues. Natural language first. No forced trees.
- Choose one platform for orchestration. Minimize glue code. Centralize identity, logging, and permissions.
- Get the data right before the pilot. Source of truth, access policies, and versioned docs. Clean beats clever.
- Fuse docs and AI teams. Treat documentation as a product. Ship, verify, expose to the model, measure usage, iterate.
- Build guardrails upfront. Role-based access, PII redaction, retrieval policies, escalation paths, and human-in-the-loop for high-risk actions.
- Prepare your MRM packet. Model lineage, evaluation methods, monitoring, incident response, and vendor docs ready for audits.
- Coach the customer. Inline examples, microcopy, and empty-state prompts. Show two or three "great questions" on load.
- Personalize responses. Use product context, entitlement, and prior cases to tune answers and next steps.
- Instrument everything. Track deflection, self-service success rate, time to resolution, containment, CSAT, and escalation reasons.
- Close the loop weekly. Review failed prompts, fix content, update tools, and retrain the agent on what changed.
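To make "instrument everything" concrete, here is a minimal sketch of the funnel math behind a few of those metrics. The `Session` fields and sample data are illustrative assumptions, not Smarsh's actual schema; in production these would come from your case and conversation logs.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One support interaction. Field names are illustrative, not a real schema."""
    resolved_by_agent: bool       # AI answered without a human handoff
    escalated: bool               # handed off to a human rep
    minutes_to_resolution: float  # wall-clock time to close

def kpis(sessions: list[Session]) -> dict[str, float]:
    """Compute core funnel metrics: self-service success, containment, time to resolution."""
    total = len(sessions)
    contained = sum(s.resolved_by_agent and not s.escalated for s in sessions)
    escalations = sum(s.escalated for s in sessions)
    avg_ttr = sum(s.minutes_to_resolution for s in sessions) / total
    return {
        "self_service_success_rate": contained / total,
        "containment": 1 - escalations / total,
        "avg_minutes_to_resolution": avg_ttr,
    }

# Toy data: two contained sessions, two escalations.
sessions = [
    Session(True, False, 4.0),
    Session(True, False, 6.0),
    Session(False, True, 30.0),
    Session(False, True, 25.0),
]
print(kpis(sessions))
# → {'self_service_success_rate': 0.5, 'containment': 0.5, 'avg_minutes_to_resolution': 16.25}
```

The point of the weekly loop is that these numbers move only when content and tools change, so track them per product and per escalation reason, not just in aggregate.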
Practical setup checklist
- Knowledge sources: Product docs, release notes, runbooks, known errors, policy and compliance guides.
- Access controls: Enforce data segmentation by account, product, and role. Log every retrieval and action.
- Retrieval quality: Deduplicate, chunk with headings, add metadata, and test recall/precision on real tickets.
- Action tools: Case creation, entitlement checks, status lookups, config diagnostics, safe changes with approvals.
- Safety gates: Thresholds for low-confidence answers, auto-escalation, and human approval for sensitive steps.
- UX cues: Example prompts, quick actions, progressive disclosure, and clear "handoff to human" paths.
- Monitoring: Drift alerts, hallucination checks via eval sets, and red-team prompts focused on compliance.
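The "test recall/precision on real tickets" item can be sketched as a small offline eval: label which knowledge articles actually resolved past tickets, then score what your retriever returns. The ticket queries, doc IDs, and stand-in retriever output below are hypothetical; swap in your own search or RAG pipeline.

```python
def precision_recall_at_k(retrieved: list[str], relevant: set[str],
                          k: int = 5) -> tuple[float, float]:
    """Precision@k and recall@k for one ticket's ranked list of doc IDs."""
    top_k = retrieved[:k]
    hits = sum(doc in relevant for doc in top_k)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Labeled test set: ticket query -> doc IDs a rep actually used to resolve it.
tickets = {
    "export fails with timeout": {"kb-102", "kb-233"},
    "reset archive retention policy": {"kb-410"},
}
# Stand-in retriever output; in practice, call your retrieval pipeline here.
retrieved = {
    "export fails with timeout": ["kb-102", "kb-555", "kb-233", "kb-001", "kb-009"],
    "reset archive retention policy": ["kb-777", "kb-410", "kb-003", "kb-004", "kb-005"],
}

for query, relevant in tickets.items():
    p, r = precision_recall_at_k(retrieved[query], relevant, k=5)
    print(f"{query}: precision@5={p:.2f} recall@5={r:.2f}")
```

Running this against real resolved tickets, rather than synthetic queries, is what surfaces deduplication and chunking problems before customers do.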
Smarsh's early results and targets
- 59% self-service adoption after improving personalization and change management.
- Expected: 20% increase in self-service success rates.
- Expected: 25% faster issue resolution compared to search-and-browse methods.
- Expected: 30% productivity boost for service reps.
Bottom line
The lesson is simple: an AI agent isn't another widget in your help center. It's the front door. One place to ask, decide, and act - backed by clean data, clear controls, and a platform that can finish the job.
Building this inside your org? Start here: AI Learning Path for User Support Specialists. For more ideas across helpdesk automation and self-service design, explore AI for Customer Support.