AI isn't coming - it's here. What that means for insurance professionals today
Insurance has always dealt in uncertainty. This year's Chartered Insurance Institute (CII) conference made something certain: AI isn't a future project - it's changing your job now.
CII chief executive Matthew Hill captured the mood as "tempered optimism." The room was engaged because the questions hit close to home: What changes first? How fast? And what's left for humans when machines do the grunt work in seconds?
A profession paying closer attention
This wasn't a box-ticking CPD day. Attendees leaned in because the implications are personal and immediate. Hill noted the shift: people weren't just asking what AI means for the sector - they were asking what it means for their careers, their teams, and their families.
The takeaway: this is not theory. It's operational. It's client-facing. And it's rewiring how value is created.
AI is already changing day-to-day work
Hill shared a visit to a large insurer where the complaints team replaced hours of human review with instant AI call summaries. The tool pulled key facts faster and more accurately, freeing staff to spend time with customers instead of transcripts.
Underwriting, broking, and claims are seeing the same pattern. Policy wording checks, first-notice-of-loss details, market comparisons - the tasks that used to justify head-down effort are being done in seconds. The value shifts from information delivery to interpretation and judgement.
As Hill put it, "Clients will still need someone they trust to help them make sense of this. AI doesn't replace listening - it should free us up to do more of it."
The uncomfortable question
Hill urged a sharper lens: "What will I do when AI becomes a better solution to the problems that insurance was created to solve?" That's not a threat - it's a prompt to redefine your edge.
If admin, extraction, and standard comparisons are handled by machines, your leverage becomes advice, structure, governance, and trust. In short: less typing, more thinking.
Professionalism is the sector's edge
Technology will change every quarter. Principles shouldn't. "Ethics and values are transcendent and persistent," Hill said. Whether delivered by hand or by machine, the standard is the same: treat clients with fairness and respect.
The CII's stance isn't to draft a new rule for every tool. It's to instil a way of thinking. Ask before you deploy: What message does this send? Does this build trust or just chase margin? That mindset is what separates durable firms from those that commoditise themselves.
For reference, the CII's Code of Ethics is a useful anchor when pressure mounts.
Don't underestimate what's coming
Hill's blunt view: most people still underestimate how much everyday roles will change. It's hard to picture the step-change from horse and cart to high-speed rail until it's in front of you. That lack of imagination is human - and risky.
Education and regulation will lag. "This is moving so fast that everyone is going to be late all of the time," Hill said. Bureaucracies aren't built for this speed. Your response has to be practical and near-term.
Entry routes are changing - ask "entry to what?"
Many worry about entry-level roles disappearing. If the profession itself is being redefined, the question changes: entry to what? Less paper-pushing, more client judgement and controls. Less rekeying, more scenario thinking and stewardship.
That can be good news for careers - if firms create clear learning paths for decision-making, data fluency, and ethical use of AI.
Your 90-day action plan
- Audit the work: List recurring tasks in underwriting, broking, and claims. Tag each as automate, augment, or human-only. Be specific.
- Ship two quick wins: Pilot call summarisation for complaints and document review for policy wordings. Measure time saved and error rates.
- Draw decision boundaries: Write down what AI can propose vs. what humans must approve. Make escalation rules explicit.
- Tighten data governance: Check data sources, retention, and redaction. Align with UK GDPR and internal policies. Useful primer: ICO guidance on AI.
- Retrain for judgement: Run sessions on interpreting model outputs, asking better questions, and explaining decisions to clients.
- Update client comms: Be clear where AI is used, how quality is checked, and how to reach a human. Trust compounds when you're transparent.
- Measure what matters: Track cycle time, complaint themes, leakage, and Net Promoter Score (NPS) by process. Review monthly. Adjust or roll back where needed.
- Keep humans in the loop: Require human sign-off for high-impact or ambiguous cases. Document rationale, not just outcomes.
How to stay valuable
Be the interpreter, not the courier. Summaries are cheap. Context and judgement are scarce. Build your craft around the latter.
Specialise in client outcomes. Translate model outputs into risk decisions clients can act on. Speak clearly. Avoid jargon.
Be the ethical backstop. Know where models can drift and where bias can creep in. If you can explain the "why" behind a decision, you're hard to replace.
If you're leading a team
Set a simple rule: adopt tools that save time and raise quality - then reinvest the time in better advice and service. Make it normal to kill pilots that don't meet the bar.
Write a one-page AI policy everyone can understand: what we use, why we use it, how we check it, and where a human must decide. Clarity beats a 40-page PDF no one reads.
The opportunity in front of the sector
The profession is being rewritten in real time. Those who move past transactions, lean into strategic guidance, and uphold trust at every turn will define what "good" looks like in an AI-first era.
If you want structured upskilling paths by role, this resource can help: AI courses by job. Start small. Measure. Keep what works. And keep your standards high.
Bottom line: AI will take the busywork. You take the responsibility. That's where the career upside lives.