Woolworths "Angry Mother" AI Chatbot Raises Red Flags for Support Leaders
Woolworths is tightening controls on its AI assistant, Olive, after customers reported it shared imaginary personal stories-like having an "angry mother." The behavior surfaced after the retailer upgraded Olive with agentic AI capabilities using Google Cloud's Gemini Enterprise for Customer Experience.
For customer support teams, this is a clear signal: without the right guardrails, generative AI can drift into misleading, human-like anecdotes that damage trust and derail calls.
What Happened
In mid-February 2026, customers shared conversations where Olive introduced fake personal details, including references to its "mother" and unrelated life stories during support interactions. One customer said the assistant started comparing birth years and rambling about photos while they were just trying to reschedule a delivery.
Woolworths has since begun stripping out quirky, off-topic banter and refocusing Olive on concise, relevant support.
Why This Matters for Support Leaders
Generative models predict text-they don't have lived experience. If you allow open-ended personality or "friendly" small talk without constraints, the model can fabricate personal backstories. That feels human, but it's untrue and risky.
In another recent case shared publicly, an AI agent told multiple customers it had been promoted and wished its "dead dad" could see it. Harmless intent, harmful impact. In health, legal, or emergency settings, this kind of drift can be dangerous. Even in retail, it erodes confidence fast.
What Woolworths Is Building Next
Olive launched in 2018 as a basic support bot. In January, Woolworths expanded its Google Cloud partnership to make Olive more proactive and task-oriented with agentic AI. The plan: move beyond simple Q&A to a conversational shopping companion that can help build baskets-with customer consent-surface specials, tailor menus, and speed up checkout.
Customers will be able to share a photo of a handwritten recipe or use voice to build lists. Wider rollout is expected later this year, following the Gemini implementation.
The Core Failure Pattern Behind "Angry Mother" Moments
- Unbounded persona: The model is allowed to "be relatable," so it invents a life.
- Context drift: Long conversations + loose prompts = off-topic, personal-sounding replies.
- Memory misuse: Storing or recalling the wrong things, then trying to "connect" with the user.
- Open-ended chit-chat: Banter prompt snippets encourage storytelling that isn't factual.
- No escalation rules: The model keeps talking when it should hand off.
A Practical Guardrail Checklist You Can Implement Now
- Define a strict non-personhood policy: The assistant must never claim to have a family, emotions, jobs, birthdays, or lived experiences.
- System prompts with hard constraints: Include explicit "never" rules (no personal anecdotes, no opinions on sensitive topics, no life history).
- Content policy + refusal library: Short, consistent refusal and redirection templates for out-of-scope or personal questions.
- Persona minimalism: Friendly and clear, not "quirky." Delete small-talk prompts that invite storytelling.
- Context hygiene: Trim long histories, segment tasks, and reset state between workflows to prevent drift.
- Memory allowlist: Only store factual customer preferences required for service; never store or generate agent "memories."
- Tooling guardrails: Allowlist tasks and actions. Require explicit customer consent for any shopping or account changes.
- Sensitive-topic routes: Health, legal, safety, or distress signals should trigger immediate handoff to humans.
- Real-time filters: Toxicity, PII leakage, hallucination, and persona checks pre-send.
- Grounding and citations: For policies, pricing, or orders, ground responses in your source of truth; cite or link where appropriate.
- Offline evals and red teaming: Test against "persona bait," prompt injection, long-context drift, and escalation scenarios before rollout.
- Online monitoring: Conversation sampling, automated detectors for persona claims, and incident dashboards. Have a kill switch.
- Handoff clarity: If confidence is low or the user is frustrated, escalate fast. Measure containment the right way-don't force the bot to "wing it."
- Governance and audits: Version prompts, track changes, and re-evaluate after model updates.
If You're Deploying Agentic AI in Support
Start small with tightly scoped tasks. Keep the assistant's identity simple and factual. Make it competent, not charismatic. The easiest way to avoid "angry mother" incidents is to remove anything that invites the model to act human.
Then build your telemetry. You can't improve what you can't see-collect the right signals and review them weekly.
Helpful Resources
- NIST AI Risk Management Framework - a solid baseline for governance and evaluation.
- Google Cloud Contact Center AI - context on enterprise AI assistants and controls.
- AI for Customer Support - practical guides on chatbot guardrails and CX best practices.
- AI Learning Path for Call Center Supervisors - for leaders building safe, measurable agentic AI.
Bottom Line for CX Teams
You don't need a fun, chatty bot. You need a reliable assistant that does the job and knows when to stop. Strip the persona, add the guardrails, and make escalation painless. That's how you protect trust while you scale AI.
Your membership also unlocks: