Allianz partners with Anthropic to advance responsible AI in insurance
Allianz SE has partnered with Anthropic to accelerate responsible AI adoption across the group. The work focuses on practical gains for underwriting, claims, and operations, without sacrificing governance, accuracy, or customer trust.
Oliver Bäte, chief executive at Allianz SE, said: "With this partnership, Allianz is taking a decisive step to address critical AI challenges in insurance. Anthropic's focus on safety and transparency complements our strong dedication to customer excellence and stakeholder trust. Together, we are building solutions that prioritise what matters most to our customers while setting new standards for innovation and resilience."
Three projects with direct operational impact
- Empowering people and reimagining code with AI: Internal assistants help employees write, review, and refactor code, document processes, and summarise policy or claims data. Expect faster iteration on rating logic, cleaner legacy integrations, and fewer manual handoffs.
- Custom AI agents for multistep workflows (with humans in the loop): Agents orchestrate tasks such as claims triage, subrogation prep, policy changes, and first notice of loss. They route edge cases to experts, keep approvals visible, and reduce cycle time without removing human judgment.
- Transparency and compliance by default: Co-developed systems log every decision, rationale, and data source. This supports auditability, model risk management, and regulatory needs specific to insurance (a minimal sketch of the pattern follows this list).
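To make the routing and audit-logging ideas above concrete, here is a minimal Python sketch of a human-in-the-loop workflow step. All names (TriageResult, route_claim, the confidence threshold) are illustrative assumptions, not part of any Allianz or Anthropic system; the point is only that uncertain or high-impact cases go to a person, and every decision is logged with its rationale and data sources.

```python
# Sketch: a single workflow step that routes edge cases to human review
# and appends an audit record for every decision. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class TriageResult:
    claim_id: str
    recommendation: str                     # e.g. "fast-track", "refer-to-adjuster"
    confidence: float                       # model-reported or calibrated score
    rationale: str                          # short explanation for the decision
    sources: list = field(default_factory=list)  # documents the model relied on

AUTO_APPROVE_THRESHOLD = 0.90               # assumption: tune per workflow and risk appetite

def route_claim(result: TriageResult, review_queue: list, audit_log: list) -> str:
    """Send low-confidence or non-routine cases to a human; log every decision."""
    needs_human = (
        result.confidence < AUTO_APPROVE_THRESHOLD
        or result.recommendation != "fast-track"
    )
    decision = "human_review" if needs_human else "auto_proceed"
    if needs_human:
        review_queue.append(result.claim_id)        # adjuster keeps final judgment
    # Every step is logged with rationale and sources to support audits.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "claim_id": result.claim_id,
        "decision": decision,
        "rationale": result.rationale,
        "sources": result.sources,
    })
    return decision

# Example: an uncertain triage result goes to the review queue, not straight through.
queue, log = [], []
r = TriageResult("CLM-1042", "refer-to-adjuster", 0.62,
                 "Injury claim with missing police report", ["fnol_form.pdf"])
print(route_claim(r, queue, log))    # -> "human_review"
print(json.dumps(log[0], indent=2))
```

The same pattern scales to approvals: the audit log doubles as the evidence trail for model risk reviews and external audits.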
Why this matters for insurers
- Accuracy as a core metric: The partnership sets higher bars for factuality and traceability so teams can trust outputs in pricing, claims, and customer communication.
- Human oversight built in: High-impact steps keep adjusters, underwriters, and compliance in control, reducing operational risk while improving throughput.
- Audit-ready AI: End-to-end logs simplify model review, incident response, and external audits, important as AI rules tighten across markets. See the EU AI Act overview.
What leaders should do next
- Pick two workflows with high manual load and clear KPIs (e.g., converting claims intake notes to structured data, endorsement processing); a sketch of the first follows this list.
- Instrument baseline metrics: handling time, rework, leakage, complaint rates, and accuracy thresholds for production use.
- Pilot with human-in-the-loop gates; require explanations and data sources for key decisions.
- Stand up logging, access controls, PII handling, and red-teaming before scaling.
- Update vendor risk reviews and model governance policies to reflect AI-specific controls.
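As a starting point for the first suggested pilot, the sketch below turns a free-text claims intake note into structured data and records one baseline metric (handling time). It assumes the official `anthropic` Python SDK with an ANTHROPIC_API_KEY in the environment; the model name, field list, and prompt are illustrative, not Allianz-specific.

```python
# Sketch of a pilot step: extract structured fields from a claims intake note
# and capture handling time as a baseline metric. Schema and model name are
# illustrative assumptions.
import json
import time
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Extract the following fields from the claims intake note as JSON: "
    "policy_number, loss_date, loss_type, estimated_amount, summary. "
    "Use null for anything not stated. Respond with JSON only.\n\nNote:\n{note}"
)

def extract_claim_fields(note: str) -> dict:
    start = time.perf_counter()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",   # illustrative model name
        max_tokens=512,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(note=note)}],
    )
    # In production, validate the output against a schema and handle non-JSON replies.
    fields = json.loads(message.content[0].text)
    fields["_handling_seconds"] = round(time.perf_counter() - start, 2)  # baseline metric
    return fields

note = "Caller reports hail damage to roof on 12 March, policy HH-99812, est. 4,500 EUR."
print(json.dumps(extract_claim_fields(note), indent=2))
```

Pair this with the human-in-the-loop gate shown earlier, and compare the recorded handling times and accuracy against your pre-pilot baseline before scaling.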
Dario Amodei, chief executive and co-founder at Anthropic, said: "Insurance is an industry where the stakes of using AI are particularly high - the decisions can affect millions of people. Allianz and Anthropic both take that very seriously and we look forward to working together to make insurance better for those who depend on it."
Where this could show early results
- Claims: Triage, liability hints, document extraction, and fraud pattern flags, with clear audit trails.
- Underwriting: Risk summarisation from broker submissions, policy wording comparisons, and code refactoring for rating components.
- Operations: Knowledge assistants for frontline teams, guided responses for complex queries, and compliant call summaries.
For background on AI safety practices from Anthropic, see their published approach to responsible AI development.
Upskilling your team
If you plan to pilot Claude-based tools and need practical training for analysts, engineers, or product owners, explore this focused program: AI Certification for Claude.