Macquarie Bank rolls out AI agent "Q" with async human hand-off: what support leaders can take from it
Macquarie Bank has launched Q, an AI agent inside its mobile app and online banking. It handles everyday questions like payment limits and transaction timings, and it does that around the clock. The standout move: when Q can't help, it hands off to a human using asynchronous messaging, so customers don't sit stuck in a live chat queue.
For support leaders, this is a clear signal. The bar for "fast, personal, low-friction" service is moving from scripted bots and live chat to smart agents with clean escalation paths.
What makes Q different
- Asynchronous escalation: If Q hits a boundary, it moves the conversation to a message thread. Customers can leave and return when a human replies: no waiting room, no broken session.
- Personalised help: Answers are context-aware and tied to common banking tasks. The design focus is less on novelty and more on getting the customer to clarity quickly.
- Clear boundaries: Q doesn't pretend to do everything. It routes to people when needed, preserving trust and speed.
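The escalation pattern described above can be sketched as a single decision step. This is a minimal, hypothetical Python sketch of the general technique, not Macquarie's implementation; the topic list, confidence threshold, and field names are all illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical guardrail list: topics the agent must always hand to a human.
RESTRICTED_TOPICS = {"identity", "legal_complaint", "billing_dispute"}

@dataclass
class BotReply:
    text: str
    escalated: bool = False

def handle_message(topic: str, confidence: float, answer: str) -> BotReply:
    """Answer in-bot when the topic is safe and confidence is high enough;
    otherwise move the conversation to an asynchronous human thread instead
    of holding the customer in a live queue."""
    if topic in RESTRICTED_TOPICS or confidence < 0.7:  # threshold is illustrative
        return BotReply(
            text=(
                "We're handing this to a specialist. You'll get a reply "
                "in this thread, so there's no need to wait here."
            ),
            escalated=True,
        )
    return BotReply(text=answer)
```

The key design choice is that escalation produces a message in the same thread rather than a queue transfer, which is what lets the customer leave and come back.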
Roadmap: from reactive answers to money insights
Macquarie plans to grow Q from simple Q&A to personalised insights. Think "How much did I spend on groceries this year?" or "How long will it take to save for a holiday?" with predictive estimates.
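The "save for a holiday" insight reduces to simple arithmetic in its most naive form. A hedged sketch, purely to illustrate the shape of such a prediction; it ignores interest and variable income, and is in no way Q's actual model:

```python
import math

def months_to_save(target: float, monthly_saving: float) -> int:
    """Naive estimate of months needed to reach a savings target at a flat
    monthly rate (no interest, no variability -- illustrative only)."""
    if monthly_saving <= 0:
        raise ValueError("monthly saving must be positive")
    return math.ceil(target / monthly_saving)

# e.g. a $3,000 holiday at $250/month -> 12 months
```

A production version would presumably work from observed transaction history rather than a user-supplied rate, which is where the "predictive" part comes in.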
The bank says it will iterate based on real customer feedback. That feedback loop is where most teams either compound value or stall.
Security baked in
Macquarie highlights two-factor authentication and encryption to protect conversation data. For any support org rolling out an AI agent, this level of protection is table stakes.
The broader shift across Australian enterprises
Australia's major banks are pressing ahead with AI. Commonwealth Bank has applied generative AI to scam detection and engagement, and uses machine learning for bill prediction and spend control. National Australia Bank uses AI for home loan assessment, software development, and fraud detection. Westpac has run its Wendy chatbot and is partnering with Accenture on AI agents across product and service.
Outside banking, Woolworths is boosting its digital shopping assistant with Google Cloud's agentic AI. Canva is training staff on low-code AI and agentic tools. Suncorp's chatbots handle around two million conversations a year. Still, research indicates many Australian organisations lag their regional peers, putting significant economic upside at risk, with common blockers like weak governance, shadow AI, limited training, and unclear ROI measures. Cisco's AI Readiness Index offers useful context on these gaps.
A practical playbook to test async hand-off in your support org
- Start with 15-25 top intents: Payment issues, order status, account access-whatever drives the most inbound volume.
- Define hard guardrails: Topics the agent must deflect or escalate (identity issues, legal complaints, complex billing disputes).
- Design the async flow: Clear "We're handing this to a specialist" messages, expected response times, and seamless return-to-thread on mobile and web.
- Route intelligently: Skills-based routing with ownership until resolution. No bouncing between queues.
- Instrument everything: Time to first response (bot and human), resolution time, deflection rate, containment, and CSAT by path.
- Prep agents: Short prompts, conversation summaries, and suggested replies. Train on escalation criteria and tone in async contexts.
- Feedback loop: Daily review of failed intents, confusing answers, and long waits. Fix the knowledge, not just the ticket.
- Compliance and privacy: Keep sensitive data masked, log consent, and align with internal governance.
- Set expectations with customers: Publish typical response windows and how to re-open a thread. Consistency beats speed promises you can't keep.
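The "route intelligently" step above hinges on sticky ownership: once a thread is assigned, it stays with the same queue until resolution. A minimal sketch of that idea, with hypothetical intent and queue names (this is a generic pattern, not any vendor's API):

```python
# Hypothetical skills-based routing map; names are illustrative.
SKILL_MAP = {
    "payment_issue": "payments_team",
    "order_status": "fulfilment_team",
    "account_access": "identity_team",
}
DEFAULT_QUEUE = "general_support"

assignments: dict[str, str] = {}  # thread_id -> owning queue

def route(thread_id: str, intent: str) -> str:
    """Assign an escalated thread to a skills-matched queue. Once assigned,
    the same queue owns the thread until resolution -- no bouncing, even if
    later messages look like a different intent."""
    if thread_id in assignments:
        return assignments[thread_id]
    queue = SKILL_MAP.get(intent, DEFAULT_QUEUE)
    assignments[thread_id] = queue
    return queue
```

The ownership check before the intent lookup is what prevents an open async thread from being re-routed mid-conversation.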
Metrics that matter
- Containment rate: % resolved by the AI without human help (with no increase in recontacts).
- Resolution time: Bot-only, human-only, and blended (bot + human).
- Time to first human response: Critical in async flows; set an SLO per queue.
- Queue health: Backlog age, message re-open rate, and time-in-state.
- Quality and trust: CSAT by path, accuracy audits on AI answers, and complaint rate.
- Cost per resolution: By intent and by channel, to track ROI honestly.
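The first two metrics above are easy to compute from resolved-conversation records. A small sketch under assumed field names (`path`, `minutes`, `reopened` are illustrative; your ticketing export will differ). Note the containment definition excludes re-opened threads, matching the "no increase in recontacts" caveat:

```python
from statistics import median

# Hypothetical resolved-conversation records.
conversations = [
    {"path": "bot_only",   "minutes": 2,   "reopened": False},
    {"path": "bot_only",   "minutes": 3,   "reopened": True},
    {"path": "blended",    "minutes": 45,  "reopened": False},
    {"path": "human_only", "minutes": 120, "reopened": False},
]

def containment_rate(convos) -> float:
    """Share of conversations resolved by the AI alone, counting a
    re-opened (recontact) thread as not contained."""
    contained = [c for c in convos if c["path"] == "bot_only" and not c["reopened"]]
    return len(contained) / len(convos)

def median_resolution(convos, path: str) -> float:
    """Median resolution time for one path: bot-only, human-only, or blended."""
    return median(c["minutes"] for c in convos if c["path"] == path)
```

Medians are usually a safer headline than means here, since a handful of long-tail threads can dominate an average.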
Common pitfalls to avoid
- Over-automating sensitive issues or anything with high financial risk.
- Async without strong SLOs: conversations linger and frustration climbs.
- Opaque hand-offs: customers need to know who owns their case and when to expect a reply.
- One-and-done training: models improve, policies change, and agents need refreshers.
- Chasing vanity metrics: track outcomes, not just message counts.
What to do next
Pick a narrow slice of intents, ship an async hand-off path, and learn fast. The goal isn't a perfect bot; it's a smoother path to resolution with less waiting and fewer dead ends.
Bottom line
Q shows where customer support is heading: instant answers when possible, graceful escalation when not, and a feedback loop that improves with use. The teams that win will pair smart automation with clear promises and strong human follow-through.