Cut the grunt work, keep the judgment: Insurance leaders fear AI-dependent junior underwriters

Leaders warn that junior underwriters may dull their judgment by leaning on chatbots: 'garbage in, garbage out.' Use AI to speed up the grunt work, but verify the output and keep the final call human.

Published on: Dec 06, 2025

'Garbage in, garbage out': Leaders fear junior underwriters who can't think without AI

AI is seeping into every corner of underwriting. The tech isn't what keeps executives up at night - it's the risk that junior talent stops building judgment because a chatbot feels faster and sounds confident.

At a recent industry event in Toronto, CFC Underwriting CEO Kate Della Mora was blunt: "Yes, 100%," she said when asked whether early-career staff could become too reliant on AI. Her point: critical thinking is the skill that must be protected and trained - because, as she put it, "it's garbage in, garbage out."

The real risk: dulling judgment, not losing jobs

Della Mora isn't worried AI will replace junior underwriters. She's worried it will blunt their ability to think, question, and verify.

She noted that large language models can sound confident while being wrong - and less experienced users may not spot the errors. She also flagged reputational risk: if the internet is seeded with skewed reviews about a carrier, a naïve model can surface those as "facts." "I've asked ChatGPT questions, and it's wrong… it's going to spit that out to you as fact, and it's not. It's been created," she said.

"Stress-test what Copilot spits out"

Frederic Ling, SVP and head of specialty at Liberty Mutual, shares the concern. His worry isn't only about first drafts - it's that AI-assisted work can look convincing enough to pass a review without real substance behind it.

"My fear is that a junior underwriter can read a Copilot output and present it in a really convincing manner… and convince you that it is, in fact, true," he said. The fix: deliberate stress-testing. "See if there's enough layers behind the comment or the conviction to show you understand the risk or analysis." Without that discipline, teams can fall into a circular loop where chatbots reinforce each other's outputs and everyone assumes they must be right.

AI as assistant, not underwriter

Both leaders argue AI can strengthen underwriting - if it compresses grunt work and clears space for judgment. Della Mora pointed to complex submissions that used to take hours or days to parse; models can extract the pertinent points fast, so humans can spend time on context, relationships, and risk decisions.

Ling told a story from early in his career: three or four days locked in a room combing through financials for a complex banking risk. When asked for his view, he didn't have one - he'd only gathered data. That's what should change: data and transformation tools should take underwriters 80% of the way; the final 20% is subjectivity, context, and the commercial relationship.

Practical guardrails for underwriting teams

  • Adopt a "two-source rule" for critical facts. No bound policy should rely on a single AI-generated claim without independent verification.
  • Require a reasoning trail. If AI helped, attach the prompt, output, sources, and the underwriter's own analysis and conclusion (a minimal sketch of such a record follows this list).
  • Institutionalize stress-tests. For any AI-assisted recommendation, ask: What would change my view? What's the strongest counter-case?
  • Set decision thresholds. Define which decisions AI can inform, which require senior sign-off, and which are out of bounds.
  • Audit for "circular sourcing." Ban referencing one model's output to justify another's. Trace facts back to primary documents or authoritative data.
  • Use established governance frameworks such as the NIST AI Risk Management Framework for model use, documentation, and monitoring.
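
To make the reasoning-trail and two-source-rule guardrails concrete, here is a minimal sketch in Python of what an audit record for an AI-assisted recommendation could look like. The field names and the `passes_two_source_rule` check are illustrative assumptions, not the schema of any real underwriting platform.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssistedDecision:
    """Reasoning trail for one AI-assisted recommendation (illustrative)."""
    submission_id: str
    prompt: str                     # exact prompt sent to the model
    model_output: str               # raw output, kept verbatim
    sources: list[str] = field(default_factory=list)  # primary documents verified
    underwriter_analysis: str = ""  # the human's own reasoning
    final_call: str = ""            # bind / decline / refer: decided by a person
    decided_on: date = field(default_factory=date.today)

    def passes_two_source_rule(self) -> bool:
        # Two-source rule: no bound policy relies on a single
        # AI-generated claim without independent verification.
        return len(self.sources) >= 2

record = AIAssistedDecision(
    submission_id="SUB-2025-0142",
    prompt="Summarize the key risk drivers in the attached financials.",
    model_output="...model summary, stored verbatim...",
    sources=["audited_financials_2024.pdf", "broker_submission_notes.docx"],
    underwriter_analysis="Model flagged leverage; confirmed against the filings.",
    final_call="refer to senior underwriter",
)
assert record.passes_two_source_rule()
```

Attaching a record like this to every AI-assisted file puts the prompt, the output, and the human reasoning in one place, which also makes circular sourcing easier to spot during audits.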

How junior underwriters can build judgment with AI in the loop

  • Write your hypothesis first. Before prompting, jot your initial view of the risk drivers and deal breakers.
  • Interrogate the output. Ask the model for assumptions, missing data, and edge cases. Then verify those against filings, loss data, and broker intel.
  • Talk to humans. Call the broker or client to test anything that feels off. Relationship nuance often decides the last 20%.
  • Keep a "hallucination file." Log every confident but wrong AI answer your team finds. Review it monthly to sharpen instincts.
  • Show your work. Your manager should see what AI produced, what you changed, and why you made the final call.
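
A lightweight way to run the "hallucination file" is a shared, append-only log the whole team writes to. The sketch below assumes a plain CSV file; the path and columns are illustrative choices, not a prescribed format.

```python
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("hallucination_log.csv")  # assumed shared location; adjust per team

def log_hallucination(model: str, claim: str, correction: str, caught_by: str) -> None:
    """Append one confident-but-wrong AI answer to the team's shared log."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:  # write the header once, on first use
            writer.writerow(["date", "model", "wrong_claim", "correction", "caught_by"])
        writer.writerow([date.today().isoformat(), model, claim, correction, caught_by])

# Example entry for the monthly review
log_hallucination(
    model="ChatGPT",
    claim="Carrier X exited the cyber market in 2023.",
    correction="Carrier X still writes cyber; claim traced to a skewed blog post.",
    caught_by="j.doe",
)
```

Reviewing the log monthly, as suggested above, turns individual catches into shared instinct about where models tend to go wrong.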

What this means for leaders

AI should shorten the path to a clean dataset, not the path to a decision. The win is faster synthesis, a stronger point of view, and better client conversations - not outsourced judgment.

The message to junior talent is simple: use the tool, but think for yourself. The message to leaders is just as clear: train critical thinking, enforce stress-testing, and make the last mile of underwriting unmistakably human.

If your team needs structured upskilling on prompt craft, review workflows, and AI literacy for underwriting, explore our courses by job for practical, hands-on training.

