Training for an AI-driven insurance market: why brokers can't let fundamentals fade
AI is now core infrastructure across Australia and New Zealand insurance. Global surveys put the sector past "experimentation," with adoption moving at scale. One marker: in Australia, AI-related job ads jumped from ~2,000 in 2012 to 23,000 in 2024, and 11.8% of finance and insurance postings now ask for AI skills (PwC AI Jobs Barometer).
The tools are getting faster. Claims triage, extraction, and routing keep improving. But the craft of reading risk doesn't improve by itself. That's the fault line.
AI can't replace the craft of risk
"I think it won't replace jobs as much as it will make people better at their jobs, and the ones who don't embrace it may fall behind," said Michael Lewis, cyber development manager for Australia at CFC. The gap isn't between humans and machines; it's between teams that build AI into their workflows and those that don't.
Lewis' deeper concern is training. Junior underwriters used to learn by osmosis: reading proposals, dissecting wordings, sitting in on claims disputes, and watching judgment calls. As AI automates the rote work, that learning surface shrinks.
His prescription is simple: teach risk fundamentals on purpose, and teach AI as a tool. New entrants should learn both, early, and repeatedly.
Still a people business: what brokers must protect
"Insurance is still a people business; you still need your reputation, you still need all the relationships," said Trent Nihill, Coalition's general manager for Australia. AI can scan more data in seconds than a team can in days. It still can't build trust.
That matters because customer trust in AI is mixed. The broker who can explain an AI-assisted decision, challenge a model-driven declinature, and translate outputs into plain language will gain leverage. The broker who simply relays what the system says is replaceable.
What good training looks like now
- Teach fundamentals explicitly: policy construction, wordings, causation, aggregation, local regulation, and claims behavior in Australian and New Zealand conditions.
- Build AI literacy that serves the craft: how models work, data lineage, failure modes and bias, prompt discipline, confidence thresholds, and when to escalate to human review (a sketch of a threshold-and-escalate rule follows this list).
- Preserve apprenticeship in automated workflows: scheduled shadowing with senior underwriters, manual "case days," post-bind retros, and structured claims file reviews.
- Broker-specific value adds: interrogating model decisions, producing plain-language explainers for clients, proposing alternative structures, and negotiating with context.
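To make "confidence thresholds and when to escalate" concrete, here is a minimal sketch of a threshold-and-escalate routing rule. It assumes a hypothetical triage model; the names, threshold, and action labels are illustrative, not any vendor's API.

```python
# Minimal sketch of a confidence-threshold escalation rule for AI-assisted triage.
# All names, thresholds, and actions are hypothetical illustrations, not a vendor API.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85      # below this, the model's call goes to a human
SENSITIVE_ACTIONS = {"decline", "exclusion_change", "causation_call"}

@dataclass
class ModelOutput:
    action: str        # e.g. "accept", "decline", "refer"
    confidence: float  # model's self-reported confidence, 0..1
    rationale: str     # plain-language reasoning kept for the audit trail

def route(output: ModelOutput) -> str:
    """Decide whether the model's recommendation can proceed or must be escalated."""
    if output.action in SENSITIVE_ACTIONS:
        return "escalate_to_underwriter"      # guardrail: AI never owns these calls
    if output.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_underwriter"      # low confidence -> human review
    return "proceed_with_human_spot_check"    # high confidence, routine action

print(route(ModelOutput("decline", 0.93, "Thin data on novel cyber exposure")))
# -> escalate_to_underwriter
```

The design choice worth noting: sensitive actions bypass the confidence check entirely, so the guardrails listed further down stay with a human no matter how sure the model is.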
Practical moves for insurers and brokerages
- Rebuild early-career pathways with rotations across underwriting, claims, and broking.
- Design workflows so juniors still touch edge cases and see how decisions are made.
- Shift KPIs from speed alone to quality: judgment, file quality, client clarity, and challenge rate on model outputs.
- Embed feedback loops: when humans override a model, capture why and feed it back into training sets and playbooks (see the sketch after this list).
- Tighten governance: maintain audit trails for prompts, data sources, and decisions; document model limitations in plain language.
- Prepare client communications: short templates that explain AI-assisted decisions without jargon.
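As a concrete illustration of the feedback-loop and audit-trail items above, here is a minimal sketch of an override log, assuming a simple append-only JSONL file; the field names and schema are hypothetical.

```python
# Minimal sketch of an override log that feeds a feedback loop and an audit trail.
# Field names and the JSONL file are illustrative assumptions, not a standard schema.
import json
from datetime import datetime, timezone

def record_override(file_id: str, model_decision: str, human_decision: str,
                    reason: str, prompts: list[str], data_sources: list[str]) -> dict:
    """Capture why a human overrode the model, plus the inputs behind the model's call."""
    entry = {
        "file_id": file_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_decision": model_decision,
        "human_decision": human_decision,
        "override_reason": reason,       # later reviewed for training sets and playbooks
        "prompts": prompts,              # audit trail: what the model was asked
        "data_sources": data_sources,    # audit trail: what the model was shown
    }
    with open("override_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_override(
    file_id="AU-2024-0153",
    model_decision="decline",
    human_decision="accept_with_endorsement",
    reason="Model missed recent MFA rollout disclosed in broker notes",
    prompts=["Summarise cyber controls from proposal"],
    data_sources=["proposal_form.pdf", "broker_notes.txt"],
)
```

Entries like these can be reviewed on a regular cadence to update playbooks, document model limitations in plain language, and surface recurring blind spots.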
Guardrails: what AI should not own
- Final declinatures on borderline submissions without human review.
- Policy wording changes, endorsements, and exclusions that impact coverage intent.
- Causation calls in disputed or complex claims.
- Aggregation logic for specialty and catastrophe-exposed portfolios.
- Pricing or appetite for novel or fast-shifting risks where data is thin (e.g., certain cyber exposures).
Metrics that matter
- Time-to-competency beyond system operation (wordings, causation, negotiation).
- Percentage of files where the broker or underwriter challenged a model output, and the outcomes of those challenges (a worked example follows this list).
- Loss ratio and leakage trends: AI-first vs. human-reviewed cohorts.
- Client clarity scores on AI-assisted explanations.
- Audit exceptions tied to model use and how quickly they're resolved.
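For illustration, a minimal sketch of how two of these metrics might be computed; the file records and figures are invented for the example.

```python
# Minimal sketch of two metrics from the list above: challenge rate on model outputs,
# and loss-ratio comparison between AI-first and human-reviewed cohorts.
# The records and numbers are hypothetical.
files = [
    {"cohort": "ai_first",       "challenged": False, "claims_paid": 12_000, "premium": 40_000},
    {"cohort": "ai_first",       "challenged": True,  "claims_paid": 30_000, "premium": 45_000},
    {"cohort": "human_reviewed", "challenged": True,  "claims_paid": 8_000,  "premium": 50_000},
]

challenge_rate = sum(f["challenged"] for f in files) / len(files)
print(f"Challenge rate: {challenge_rate:.0%}")  # share of files where a human pushed back

for cohort in ("ai_first", "human_reviewed"):
    subset = [f for f in files if f["cohort"] == cohort]
    loss_ratio = sum(f["claims_paid"] for f in subset) / sum(f["premium"] for f in subset)
    print(f"{cohort}: loss ratio {loss_ratio:.1%}")
```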
Australia and New Zealand context
A joint CSIRO-Insurance Council of Australia report argues AI is set to reshape fraud detection and complex claims management locally. New Zealand insurers are moving too, applying AI in fraud, pricing, and operations. The tech will spread. The craft will only spread if we teach it.
The job market confirms the shift. With AI skills now common requirements in finance and insurance roles, teams that upgrade training will compound advantages faster than those that wait.
Where to upskill (without losing the craft)
- Pair internal training on risk fundamentals with short, applied AI modules tied to underwriting, claims, and broking tasks.
- Use external courses to standardize language and accelerate baseline skills. For curated options by role, see Complete AI Training - courses by job.
The test isn't how fast you deploy another model. It's whether your brokers and underwriters can explain, interrogate, and improve AI-assisted decisions while keeping the fundamentals of risk front and center.