Notch deploys AI agents for regulated customer support with safety guardrails

Notch's AI agents can process refunds and update accounts in regulated industries, but only after a human approves each action. Every decision is logged and timestamped, giving compliance teams a clear audit trail.

Published on: Mar 24, 2026

Notch Deploys AI Agents in Customer Support Without Sacrificing Safety

Notch, a startup building action-taking AI agents, has figured out how to let these systems actually do things, like processing refunds or updating account information, in regulated industries where mistakes carry real consequences.

The company works in customer support, where AI agents need permission to take actions that affect customers directly. This creates a tension: agents that merely suggest solutions don't reduce workload, but agents with full autonomy risk errors that damage customer trust or violate compliance rules.

Notch's approach centers on structured oversight. The system flags decisions for human review before execution, allowing support teams to catch errors without bottlenecking every interaction. This matters in finance, healthcare, and other sectors where regulators expect humans to maintain control over consequential decisions.

Why This Matters for Support Teams

Customer support managers face pressure to reduce ticket volume while maintaining quality. Traditional chatbots frustrate customers by refusing to act. Notch's agents can resolve issues end-to-end, but only after a human approves the action.

The startup also builds audit trails into its system. Every agent decision gets logged, timestamped, and tied to the human who approved it. This documentation satisfies compliance teams and protects companies if regulators ask questions later.
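The audit-trail idea above can be sketched in a few lines. This is a minimal illustration, not Notch's actual implementation; all names (`AuditRecord`, `log_decision`, `rep_042`) are hypothetical.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit record matching what the article describes:
# each agent decision is logged, timestamped, and tied to the
# human who approved it.
@dataclass
class AuditRecord:
    action: str        # e.g. "refund" or "update_account"
    details: dict      # parameters the agent proposed
    approved_by: str   # ID of the approving human reviewer
    timestamp: str     # UTC time of approval, ISO 8601

def log_decision(log: list, action: str, details: dict, approved_by: str) -> AuditRecord:
    """Append a timestamped, attributed record of an approved decision."""
    record = AuditRecord(
        action=action,
        details=details,
        approved_by=approved_by,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log.append(record)
    return record

audit_log: list = []
log_decision(audit_log, "refund", {"order_id": "A-1001", "amount": 49.99}, "rep_042")
print(json.dumps(asdict(audit_log[0]), indent=2))
```

Because every record carries both a timestamp and an approver ID, compliance teams can later reconstruct who authorized what, and when.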

The Technical Reality

Notch relies on large language models to understand customer requests and determine appropriate responses. The critical difference: it doesn't let the model execute actions autonomously. Instead, the agent generates a proposed action, presents it to a support representative, and waits for approval.

This human-in-the-loop design trades some efficiency for safety. A support representative still reviews each proposed action. But the review happens faster than handling the entire ticket from scratch, and the agent handles routine cases that would otherwise require manual work.
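The propose-then-approve flow described above can be captured with a simple gate. This is a sketch under assumed names (`ProposedAction`, `review`, `execute`), not Notch's code; the key guardrail is that execution refuses to run anything a human has not approved.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ProposedAction:
    """An action the agent wants to take, awaiting human review."""
    kind: str
    params: dict
    status: Status = Status.PENDING

def review(action: ProposedAction, approve: bool) -> ProposedAction:
    """A support representative approves or rejects the proposal."""
    action.status = Status.APPROVED if approve else Status.REJECTED
    return action

def execute(action: ProposedAction) -> str:
    # Guardrail: nothing executes without explicit human approval.
    if action.status is not Status.APPROVED:
        raise PermissionError("action not approved by a human reviewer")
    return f"executed {action.kind} with {action.params}"

proposal = ProposedAction("refund", {"order_id": "A-1001", "amount": 49.99})
review(proposal, approve=True)
print(execute(proposal))
```

The design choice is that approval is a hard precondition in code, not a convention: even a buggy or over-eager agent cannot reach `execute` for an unreviewed action.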

For teams implementing AI for Customer Support, the lesson is clear: action-taking capability requires guardrails. Understanding how to build those guardrails, and how to document them for compliance, separates systems that work in regulated environments from those that don't.

Support leaders evaluating AI Agents & Automation should ask vendors directly about approval workflows, audit logging, and how their systems handle edge cases where the model's proposed action might be wrong.
