Leading Insurtech, Counterpart, Addresses Critical Coverage Gap With Affirmative AI Coverage
AI is now embedded in underwriting, claims, distribution, and client operations. That creates new loss paths that many policies handle poorly or not at all. An affirmative AI endorsement signals intent to cover, not avoid, these exposures.
If you place management liability, professional liability, cyber, or EPL for AI-adopting clients, this matters. Below is a practical framework to evaluate any "affirmative AI coverage" offering and prepare your book.
The gap this likely targets
- Silent AI risk: Policies neither exclude nor clearly cover AI-caused losses, leaving adjusters to interpret gray areas.
- Model-assisted services: E&O claims when AI suggestions lead to client errors or missed advice.
- Content and IP: Copyright or trademark allegations tied to AI-generated text, images, or code.
- Data misuse: Training on restricted data, scraping claims, or improper reuse of personal information.
- Bias and employment decisions: Automated screening tools triggering EPL claims for discrimination.
- Regulatory scrutiny: Investigations around disclosures, fairness, data handling, or AI governance.
What "affirmative AI coverage" usually means
In market practice, carriers move from silence to explicit terms. The goal is to define AI-related acts, confirm coverage where intended, and set clear bounds where risk is uninsurable or needs underwriting discipline.
Key features to look for in any AI endorsement
- Clear definitions: "Artificial intelligence," "automated decision system," and "model-assisted services." Ambiguity is the enemy at first notice of loss (FNOL).
- Professional services clarity: Coverage for services delivered with AI assistance, not just manual work.
- Third-party model exposure: Vicarious liability when insureds use external tools or APIs.
- IP and content: Allegations stemming from AI outputs (copyright/trademark), including defense.
- Privacy and data: Claims tied to training data, scraping, datasets with personal or sensitive information.
- Regulatory defense: Coverage for formal investigations related to AI use, where insurable.
- EPL tie-in: Use of automated hiring or HR tools leading to discrimination claims.
- Cyber coordination: How the endorsement interfaces with cyber for data incidents driven by AI tools.
- Exclusion alignment: Remove or narrow catch-all tech exclusions that would swallow the grant.
- Retroactivity: Many insureds already use AI. Confirm how prior use is treated.
- Panel vendors: Access to legal, forensics, and bias/audit specialists with AI expertise.
Underwriting signals to assess and reward
- AI inventory: systems in use, business processes affected, and data types involved.
- Human in the loop: approvals, overrides, and documentation of decisions.
- Testing and QA: pre-deployment testing, monitoring, and rollback plans.
- Bias and performance checks: periodic reviews with evidence of fixes.
- Vendor management: contract indemnities, SOC 2/ISO 27001 posture, and incident SLAs.
- Data governance: privacy-by-design, data minimization, retention, and access controls.
- Disclosure controls: marketing/legal review for AI claims to reduce deceptive practices risk.
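To make the inventory signal concrete, here is a minimal sketch of what one AI-system inventory record might look like, with a helper that surfaces the gaps an underwriter would likely probe. All field and class names are hypothetical illustrations, not taken from any carrier's application.

```python
# Illustrative only: a minimal AI-system inventory record an insured might
# maintain to answer underwriting questions. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str                       # e.g. "resume screening tool"
    business_process: str           # business process the system touches
    data_types: list                # categories of data processed
    human_in_the_loop: bool         # does a person approve outputs?
    vendor: str = None              # external provider, if any
    last_bias_review: str = None    # date of most recent bias/performance check

    def underwriting_flags(self):
        """Return control gaps an underwriter would likely ask about."""
        flags = []
        if not self.human_in_the_loop:
            flags.append("no documented human oversight")
        if self.last_bias_review is None:
            flags.append("no bias/performance review on record")
        if self.vendor and "personal" in self.data_types:
            flags.append("personal data shared with third-party vendor")
        return flags


record = AISystemRecord(
    name="resume screener",
    business_process="hiring",
    data_types=["personal"],
    human_in_the_loop=False,
    vendor="ExampleVendor",
)
print(record.underwriting_flags())
```

Even a spreadsheet with these columns is enough; the point is that each flagged gap maps directly to a question on the broker checklist below.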
Broker checklist for conversations with carriers
- Which policies include the AI grant (E&O, D&O, EPL, cyber), and how do they coordinate?
- What AI-related exclusions remain, and where are the bright lines?
- Does coverage extend to third-party tools and open-source models?
- Are regulatory inquiries covered (defense only, sublimits, carve-outs)?
- How are training data and model outputs treated for IP claims?
- What documentation should insureds maintain to support a claim?
- Any risk controls that unlock better pricing or higher limits?
Claims scenarios to pressure-test wording
- AI-assisted advice: Consultant uses a model to draft a compliance plan. Client faces a regulatory fine due to an error in the output and sues for negligence.
- Hiring bias: HR team uses a screening tool that filters out a protected group. Class action alleges discrimination.
- IP demand: Marketing publishes AI-generated images. Rights holder claims infringement and seeks damages.
How insureds can prepare now
- Map AI use across teams; document human oversight and approvals.
- Review contracts with AI vendors for indemnity, data rights, and incident duties.
- Update internal policies: acceptable use, data handling, model-testing standards.
- Train staff on prompt hygiene, confidentiality, and approval paths.
- Align insurance: endorsement language, limits, and notice requirements.
Helpful resources
- NIST AI Risk Management Framework for structure around AI risk controls.
- EEOC guidance on AI and employment decisions to reduce EPL exposure.
Level up team readiness
If clients are scaling AI, your team needs a shared baseline on safe use, oversight, and documentation. Focus training on practical workflows that reduce loss frequency and claim friction.
- AI courses by job function for fast, role-specific upskilling.
- AI automation certification to standardize controls and audit trails.
Bottom line: An affirmative AI grant is a welcome step, but the details decide claims outcomes. Press for clarity, document controls, and align endorsements with how your insureds actually work.