ACCC increases AI scrutiny as insurers expand agentic use
The ACCC is turning up the heat on AI as insurers and other financial services firms roll out agentic tools across underwriting, pricing, distribution, and claims. The message is clear: AI can improve service, but it also creates real risks for competition and consumers. If you run customer support, your workflows, scripts, and vendor choices are now in the spotlight.
"AI-enabled products and services are growing more and more important to consumers and businesses across Australia," ACCC chair Gina Cass-Gottlieb said. "However, they also come with risks of potential harms to consumers and competition."
Why this matters for customer support
AI is being integrated into the big platforms your team already uses. That can raise switching costs, centralise data, and limit choice if a few vendors control critical features. Support leaders need a plan for transparency, consent, and auditability before scaling more automation.
Key risks the ACCC flagged
- Data use without clear consent: 83% of surveyed Australians think companies should ask before using personal data to train models, yet much training happens behind dense privacy policies.
- Misrepresentations in content: AI-generated copy and visuals can overstate product features, seed "ghost websites," and dress up listings beyond reality.
- Fake reviews and smarter scams: Generative tools can flood platforms with convincing reviews and phishing content that slip past basic filters.
- Agent behaviour risks: AI agents may act in ways that look like coordination or collusion, even if no one coded it that way.
For support teams, that translates into higher stakes for what your AI says to customers, how you collect and store data, and how you detect scammy behaviour before it hits your queue.
What insurers are actually doing with agentic AI
According to Capgemini, 20% of insurers are piloting AI agents and 12% have partial deployments. Only 2% have fully scaled agents. The most common uses: customer service triage, decision support, and workflow routing across claims and underwriting.
Leaders see upside: 93% think scaling agents in the next year could provide an edge. But trust is low; just 4% fully trust AI agents today. That gap explains the push for controls, human checkpoints, and clearer policies.
Practical actions for support leaders
- Disclose AI use: Say so in chat, email, and IVR. Offer an easy route to a human. Keep the language short and plain.
- Consent-first data practices: Add a concise notice that explains data collection and whether transcripts or feedback train models. Make opt-outs workable.
- Guardrails for AI replies: Block unverified claims, set safe default language for pricing/coverage, and attach dynamic disclaimers where needed (see the first sketch after this list).
- Scam-aware workflows: Use detection for spoofed domains and risky phrases. Auto-flag suspicious requests; warn customers in-session (see the second sketch after this list).
- Review integrity: Score and down-rank likely fake reviews. Require proof of purchase for public feedback where possible.
- Vendor risk controls: Map which providers touch support data. Plan exits, ensure data portability for transcripts, and review model retraining terms.
- Human-in-the-loop: Set thresholds for escalation on refunds, coverage changes, and claim decisions. No auto-approvals without checks (paired with audit logging in the third sketch after this list).
- Audit everything: Log prompts, agent actions, and content changes. Require approvals for high-impact actions.
- Red-team your assistants: Test for hallucinations, leakage, and bias. Use retrieval to ground responses in approved knowledge.
- Tighten marketing/KB governance: If AI drafts content, add a compliance review step before publishing.
- Incident playbooks: Prepare for AI outages, bad outputs, or scam surges. Assign owners and SLAs.
- Measure what matters: Track CSAT alongside complaint rates, misrepresentation errors, and scam interception rates.
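To make the guardrails item concrete, here is a minimal sketch of a pre-send check on AI-drafted replies. The patterns, disclaimer wording, and function names are illustrative assumptions, not a vetted rule set; a production version would use language approved by your compliance team.

```python
import re
from dataclasses import dataclass, field

# Illustrative patterns only: claims a support bot should never make unreviewed.
RISKY_PATTERNS = [
    re.compile(r"\bguarantee[ds]?\b", re.IGNORECASE),
    re.compile(r"\byou(?:\s+are|'re)\s+(?:fully\s+)?covered\b", re.IGNORECASE),
    re.compile(r"\bpremium will (?:not|never) (?:change|increase)\b", re.IGNORECASE),
]

DISCLAIMER = (
    "General information only. Coverage and pricing depend on your policy; "
    "a team member can confirm specifics."
)

@dataclass
class GuardrailResult:
    allowed: bool                               # False means the draft needs a human rewrite
    reply: str = ""                             # the text to send if allowed
    flags: list = field(default_factory=list)   # which patterns triggered

def check_reply(draft: str) -> GuardrailResult:
    """Block drafts containing unverifiable claims; otherwise append a disclaimer."""
    flags = [p.pattern for p in RISKY_PATTERNS if p.search(draft)]
    if flags:
        return GuardrailResult(allowed=False, flags=flags)
    return GuardrailResult(allowed=True, reply=f"{draft}\n\n{DISCLAIMER}")

if __name__ == "__main__":
    result = check_reply("Good news: you're covered for storm damage, guaranteed.")
    print(result.allowed, result.flags)  # False, with two patterns flagged
```

Blocking the draft outright, rather than silently rewriting it, keeps a human in the loop whenever the model drifts into pricing or coverage territory.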
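The scam-aware workflow item can start as a simple signal list. This second sketch assumes a hypothetical trusted-domain set and a handful of phrase patterns; a real deployment would use your actual domains and feed the signals into your ticketing system.

```python
import re
from difflib import SequenceMatcher

# Assumed configuration: domains your org actually uses, plus phrases common in scams.
TRUSTED_DOMAINS = {"example-insurer.com.au", "example-insurer.com"}
RISKY_PHRASES = [
    re.compile(r"\burgent(?:ly)?\s+(?:payment|transfer)\b", re.IGNORECASE),
    re.compile(r"\bgift cards?\b", re.IGNORECASE),
    re.compile(r"\bupdate your bank details\b", re.IGNORECASE),
]

def looks_spoofed(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are near-matches to a trusted domain but not exact."""
    domain = domain.lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

def scam_signals(sender_domain: str, message: str) -> list:
    """Return a list of reasons to auto-flag a ticket for review."""
    signals = []
    if looks_spoofed(sender_domain):
        signals.append(f"possible lookalike domain: {sender_domain}")
    signals += [f"risky phrase: {p.pattern}" for p in RISKY_PHRASES if p.search(message)]
    return signals

if __name__ == "__main__":
    print(scam_signals("examp1e-insurer.com.au",
                       "Urgent payment needed - please update your bank details."))
```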
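For human-in-the-loop thresholds and audit logging, the basic shape is an approval gate that writes every decision to an append-only log. This third sketch uses placeholder action names, dollar limits, and a placeholder agent_audit.jsonl file; the point is that nothing auto-approves outside explicit limits and every proposed action leaves a record you can produce on demand.

```python
import json
import time
from pathlib import Path

# Placeholder thresholds: anything outside these limits always goes to a human.
ESCALATION_RULES = {
    "refund": {"max_auto": 100.00},         # refunds above $100 escalate
    "coverage_change": {"max_auto": None},  # never auto-approve
    "claim_decision": {"max_auto": None},   # never auto-approve
}

AUDIT_LOG = Path("agent_audit.jsonl")  # append-only action log (illustrative path)

def log_event(event: dict) -> None:
    """Append a timestamped record of every proposed agent action."""
    event = {"ts": time.time(), **event}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def gate_action(action: str, amount: float, agent_id: str) -> str:
    """Return 'auto_approved' or 'needs_human', logging the outcome either way."""
    rule = ESCALATION_RULES.get(action)
    if rule is None or rule["max_auto"] is None or amount > rule["max_auto"]:
        decision = "needs_human"   # unknown actions fail safe to a person
    else:
        decision = "auto_approved"
    log_event({"agent": agent_id, "action": action, "amount": amount,
               "decision": decision})
    return decision

if __name__ == "__main__":
    print(gate_action("refund", 45.00, agent_id="support-bot-1"))        # auto_approved
    print(gate_action("refund", 450.00, agent_id="support-bot-1"))       # needs_human
    print(gate_action("policy_cancel", 0.00, agent_id="support-bot-1"))  # needs_human
```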
Regulatory direction to watch
The ACCC's AI snapshot sits inside a multi-year inquiry into digital platform services and backs a formal monitoring role for emerging tech under a new digital competition regime. Expect service-specific codes for major platforms and tougher expectations around transparency and consent.
Translation for support: clearer disclosures, better consent flows, and evidence you can produce on demand. AI-generated misrepresentations and opaque data use are likely enforcement priorities.
A simple 30-day plan
- Week 1: Map every AI touchpoint in support. Identify data flows and add upfront disclosure text.
- Week 2: Update scripts and bot prompts. Block risky claims and enable human handoff.
- Week 3: Sandbox agentic use cases with approval gates and full logging. Run failure drills.
- Week 4: Ship an internal report to risk/compliance. Train frontline teams on scams and AI-safe handling.
If you want background on policy signals, see the ACCC's Digital Platform Services Inquiry and Treasury's consultations on a digital competition regime.
Upskilling your team on safe, effective AI is the fastest lever you control. Explore practical programs for support roles at Complete AI Training - Courses by Job or get hands-on with assistant workflows in the ChatGPT certification.