Tego eyes standalone AI cover as traditional policies tighten
Tego Insurance is exploring what it says could be Australia's first dedicated AI insurance product as insurers move to restrict AI exposures in professional indemnity, malpractice and liability. The proposed policy would respond to losses arising from AI tools, models and automation used in professional settings, with an early focus on healthcare.
Eric Lowenstein, CEO at Tego, says AI risk is moving faster than conventional wordings can adapt. He expects broad AI exclusions to appear across core professional lines and believes a greenfield policy will be cleaner than retrofitting AI into legacy forms.
Why this matters for insurers
AI now influences decisions across healthcare, financial services, logistics and professional services. That shifts causation and control away from purely human decision-making, which many existing policies were built around.
As exclusions widen, silent AI exposure becomes a portfolio problem. Clear triggers, definitions and aggregation language will be needed to avoid disputes at claim time.
Healthcare: first pressure point
Hospitals and clinics are deploying AI for clinical decision support, imaging, triage, remote monitoring and admin tasks. Medical Malpractice and Professional Indemnity typically respond to clinician error; they are less clear when AI assists or recommends.
There is also uncertainty around new AI regulations and how breaches will be treated. Other lines may pick up pieces (Product Liability for devices/software harm, Tech E&O for developer fault, D&O for oversight failures, Clinical Trials, Cyber), but exclusions are already surfacing there too.
What a standalone AI policy could include
- Triggers: AI-assisted or automated decision error; model failure or drift; data poisoning; configuration mistakes; automation breakdowns; third-party model or API failure.
- Loss types: Bodily injury, financial loss, privacy breaches, regulatory investigations and defense, civil penalties where insurable by law, business interruption from AI service failure, recall/withdrawal of faulty models or versions.
- Insureds: Providers and hospitals, group practices, digital health platforms, AI vendors and integrators, and entities relying on AI within clinical workflows.
- Systemic risk controls: Model/version schedules (a sketch follows this list), change-management clauses, logging and audit obligations, vendor oversight requirements, sublimits and event caps.
- Likely exclusions: Deliberate misuse, unapproved model use, failure to maintain human oversight where required, IP or data misuse outside defined parameters, war/terrorism carve-outs mirrored from cyber.
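To make "model/version schedules" concrete, here is a minimal sketch of how an insured's scheduled-model register might be represented as data; every field, model name and figure is an illustrative assumption, not a standard policy form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ScheduledModel:
    """One entry on a hypothetical policy model/version schedule."""
    name: str               # e.g. "chest-xray-triage"
    version: str            # the covered version; others may fall outside cover
    vendor: str
    use_case: str           # clinical workflow the model supports
    approved_on: date       # internal approval date, per a change-management clause
    human_oversight: bool   # whether a clinician must review outputs
    sublimit_aud: int       # per-model sublimit

schedule = [
    ScheduledModel("chest-xray-triage", "2.1.0", "Acme Imaging",
                   "ED imaging triage", date(2025, 3, 1), True, 2_000_000),
]

def on_schedule(name: str, version: str) -> bool:
    """A claim from an unscheduled model or version sits outside the schedule."""
    return any(m.name == name and m.version == version for m in schedule)

print(on_schedule("chest-xray-triage", "2.1.0"))  # True
print(on_schedule("chest-xray-triage", "2.2.0"))  # False: an unapproved upgrade
```

Keeping a register like this current is also what makes an "unapproved model use" exclusion workable at claim time.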
Underwriting signals to price and select risk
- AI governance: inventory of models, version control, approval process, rollback plans.
- Validation and monitoring: bias testing, performance thresholds, drift detection (see the sketch after this list), human-in-the-loop checkpoints.
- Data quality: provenance, consent, PHI handling, de-identification, retention and access controls.
- Incident response: AI-specific playbooks, forensics on prompts/logs, regulatory notification paths.
- Vendor risk: contractual indemnities, audit rights, uptime SLAs, update cadence, third-party certifications.
- Regulatory alignment: mapping to frameworks like the EU AI Act and the NIST AI RMF.
The NIST AI Risk Management Framework can help structure controls, and the EU AI Act gives an early view of where oversight is heading.
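As one way to operationalise the drift-detection signal above, here is a minimal sketch using the population stability index (PSI), a common distribution-shift metric; the 0.2 alert threshold is a conventional rule of thumb, not a requirement of either framework.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's validation-era score distribution and live scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero and log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores captured at validation
live = rng.normal(0.3, 1.1, 10_000)      # scores observed in production
psi = population_stability_index(baseline, live)
if psi > 0.2:  # commonly read as significant drift
    print(f"Drift alert: PSI={psi:.3f}; escalate per change-management clause")
```

An underwriter asking about "drift detection" is really asking whether a check like this runs on a schedule and who is alerted when it fires.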
Claims and wording implications
- Causation: Split responsibility between clinician, system, and vendor. Logging and explainability will be central evidence.
- Aggregation: One model/version could trigger many similar claims across facilities. Define event, occurrence and batch carefully (a toy batching sketch follows this list).
- Coverage basis: Claims-made form with clear retro dates, discovery/ERP options, and model version schedules will reduce ambiguity.
- Defense: Panels need AI forensics, clinical expertise and regulatory counsel. Early containment and data preservation are key.
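To show why the batch definition matters, here is a toy sketch under one possible wording, in which all claims traced to the same model version form a single batch subject to an aggregate cap; the claim records, figures and cap are invented for illustration.

```python
from collections import defaultdict

# Hypothetical claim records: (claim_id, model, version, facility, incurred_aud)
claims = [
    ("C1", "triage-llm", "3.2", "Hospital A", 400_000),
    ("C2", "triage-llm", "3.2", "Hospital B", 250_000),
    ("C3", "triage-llm", "3.3", "Hospital A", 150_000),
]

# One possible batch clause: same model version = one batch
batches = defaultdict(list)
for claim_id, model, version, facility, incurred in claims:
    batches[(model, version)].append(incurred)

BATCH_CAP_AUD = 500_000  # illustrative per-batch aggregate sublimit

for key, amounts in batches.items():
    total = sum(amounts)
    payable = min(total, BATCH_CAP_AUD)
    print(f"{key}: {len(amounts)} claims, incurred {total:,}, payable {payable:,}")
```

Under this wording, C1 and C2 aggregate into one capped batch while C3, on a different version, stands alone; group by facility or root cause instead and the recoveries change, which is exactly why the clause needs careful drafting.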
Broker checklist for clients now
- Audit all active policies for AI exclusions, endorsements and silent AI exposure.
- Map AI use across workflows; document model versions, vendors and decision points.
- Tighten contracts with AI vendors (indemnities, limits, audit rights, incident duties).
- Set notification protocols for AI-involved incidents; preserve logs and model outputs.
- Run scenario tests: misdiagnosis, imaging error, scheduling failure, data corruption.
- Brief boards on oversight duties; align with clinical governance and IT change control.
Reinsurance and capital
Expect clash risk across PI, malpractice, entity liability, product liability and cyber from the same AI event. Treaty language will need defined AI-event and systemic triggers, plus modeling of tail risk and event caps.
Start building realistic disaster scenarios (RDS) for model failure, cloud/provider outages, and mass notification events tied to an AI update, as sketched below.
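A back-of-envelope sketch of such an RDS, with one defined AI event producing clash losses across lines; every figure is an assumption chosen only to show the mechanics.

```python
# Toy RDS: a faulty model update propagates to every insured running that
# version, producing clash losses across lines. All figures are invented.
EXPOSED_INSUREDS = 120   # insureds on the affected model version
HIT_RATE = 0.25          # share expected to notify a claim
SEVERITY_BY_LINE = {     # assumed mean gross loss per claimant, AUD
    "professional_indemnity": 300_000,
    "medical_malpractice": 450_000,
    "product_liability": 200_000,
    "cyber": 150_000,
}
EVENT_CAP_AUD = 25_000_000  # aggregate cap for one defined AI event

claimants = EXPOSED_INSUREDS * HIT_RATE
gross = {line: claimants * sev for line, sev in SEVERITY_BY_LINE.items()}
total = sum(gross.values())
print(f"Gross clash loss across lines: {total:,.0f}")
print(f"Net of event cap: {min(total, EVENT_CAP_AUD):,.0f}")
```

Even this crude calculation shows why a defined AI event trigger and an event cap matter: one update can otherwise pierce limits across several lines at once.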
Timing and market outlook
Tego indicates the first substantial standalone AI products could land within 12-24 months. Early movers with clean wording, strong data requirements and clear aggregation controls will set the standard.