Davies builds flexibility into Vision 2030 as AI compliance rules diverge
Davies is building optionality into its Vision 2030 roadmap so insurers can adopt AI at their own pace - and on their own terms. Group chief executive Dan Saulter put it simply: "Our job in the end is to satisfy our clients' demands and to satisfy the needs of the regulators and states."
With new AI rules emerging across different jurisdictions, Davies is investing in teams that track both risks and opportunities. The outcome is a product strategy that lets clients choose where, when, and how automation is used.
Vision 2030: growth with guardrails
Launched in January 2025, Vision 2030 is built on four pillars, including operational excellence and investment in technology and AI. The target: grow global revenues to £2.5bn-£3bn over five years.
The plan centers on pragmatic automation, not wholesale replacement of people. Human oversight stays in the loop where clients or regulators require it.
ClaimPilot's agentic AI: two clear jobs
Davies has added agentic AI features to its ClaimPilot suite to take friction out of casualty claims handling. Two AI agents support adjusters and handlers so they can focus on higher-value decisions.
- Claim opening agent: Automates first notice tasks, reads and interprets documents, and flags for handler input when needed.
- Validation and injury valuation agent: Automates portions of claim validation and injury valuation to accelerate cycle time.
Some clients want more automation, faster. Others want to wait and see. Saulter's take: give both groups what they need without forcing a single path.
Compliance by design: switches, scopes, and human checks
Davies is building controls directly into ClaimPilot: features can be switched on or off, applied by territory, or limited to specific clients. If a regulator or a carrier requires human quality assurance, Davies enables a human check and balance.
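The scoping described above - global switches, territory limits, client allowlists, and a mandatory human QA flag - can be sketched as a simple feature-flag model. This is an illustrative sketch only; the class and field names are assumptions, not Davies' or ClaimPilot's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureFlag:
    """Hypothetical per-feature control, scoped by territory and client."""
    name: str
    enabled: bool = False                           # global on/off switch
    territories: set = field(default_factory=set)   # empty set = all territories
    clients: set = field(default_factory=set)       # empty set = all clients
    require_human_qa: bool = False                  # force a human check and balance

    def is_enabled(self, territory: str, client: str) -> bool:
        """True only if the switch is on and both scopes match."""
        if not self.enabled:
            return False
        if self.territories and territory not in self.territories:
            return False
        if self.clients and client not in self.clients:
            return False
        return True

# Example: injury valuation automated only in the UK, for a single
# (hypothetical) carrier, with mandatory human QA on its output.
valuation = FeatureFlag(
    name="injury_valuation",
    enabled=True,
    territories={"UK"},
    clients={"carrier_a"},
    require_human_qa=True,
)
```

The design choice worth noting: an empty scope means "applies everywhere", so a regulator-driven restriction is added by narrowing a set rather than rewriting logic.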
This matters because rules vary. The EU's AI Act, for example, introduces risk classifications and obligations for providers and deployers. See the official summary for context: EU AI Act.
What insurers should do now
- Request feature toggles by jurisdiction, line of business, and workflow step.
- Define where human-in-the-loop review is mandatory, and require auditable QA logs.
- Set thresholds for when the AI asks for handler input (e.g., document ambiguity, injury severity).
- Run scenario tests on edge cases: multi-injury claims, conflicting medical reports, fraud signals.
- Align governance with emerging standards (e.g., risk assessments, model monitoring, data lineage).
- Upskill claims teams on AI supervision and exception handling. If you need structured options by role, explore AI training by job function.
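The third item above - thresholds for when the AI asks for handler input - can be expressed as an explicit escalation rule. A minimal sketch, assuming hypothetical claim fields and threshold values; none of this reflects ClaimPilot's internals:

```python
# Illustrative escalation rule: route a claim to a human handler when
# confidence or severity thresholds are crossed. Field names and
# threshold values are assumptions for the sketch.

AMBIGUITY_THRESHOLD = 0.80   # minimum document-extraction confidence
SEVERITY_THRESHOLD = 3       # injury severity score on an assumed 1-5 scale

def needs_handler_review(claim: dict) -> bool:
    """Return True if the claim should pause for human-in-the-loop review."""
    if claim.get("doc_confidence", 0.0) < AMBIGUITY_THRESHOLD:
        return True          # ambiguous or poorly extracted documents
    if claim.get("injury_severity", 0) >= SEVERITY_THRESHOLD:
        return True          # serious injuries always get a human check
    if claim.get("fraud_signal", False):
        return True          # fraud indicators always escalate
    return False
```

Making the rule this explicit also gives auditors something concrete to log: every escalation decision traces back to a named threshold.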
Bottom line
Davies isn't betting on one version of AI adoption. It's building for different risk appetites and different rulebooks - with clear on/off switches and human oversight where it counts. For insurers, that means you can move forward now, pilot safely, and expand as your controls and confidence grow.
As Saulter put it: "Where the client really requires us to have a human check and balance or a team doing quality assurance, that's what we're going to do." That flexibility is the strategy.