Counterpart expands AI coverage for small businesses
November 26, 2025 - Counterpart, the Los Angeles-based agentic insurance platform, has expanded its Affirmative Artificial Intelligence (AI) Coverage and introduced a new Technology Errors and Omissions (E&O) Insuring Agreement. The update strengthens protection across its Miscellaneous Professional Liability (MPL) and Allied Health products for small businesses using AI in daily operations.
Why this matters for insurance professionals
AI is now embedded in marketing, customer service, and research workflows. Traditional policies haven't kept pace, leaving gaps around misinformation, discriminatory outputs, and faulty automated decisions. Affirmative coverage removes ambiguity so brokers aren't relying on "silent" policy language and hope.
Research highlighted by Harvard Law's Forum on Corporate Governance points to increasing exclusions and uncertainty around automated decision-making, putting brokers and insureds in a bind unless coverage is explicit and clearly worded.
What's new in Counterpart's offering
- Affirmative AI Coverage: Explicit protection for claims tied to both first- and third-party AI tools.
- Error types in scope: Inaccurate AI-generated reports, biased model outputs, misclassified or mislabeled data, and flawed automated decision support in professional settings.
- Clarity across lines: Addresses areas where E&O, D&O, Cyber, and CGL are often silent or limited by exclusions.
- New Tech E&O Insuring Agreement: Adds targeted protection for technology-related professional services and products.
Where it fits
- Miscellaneous Professional Liability (MPL)
- Allied Health
- Technology E&O
Broker checklist: fast discovery on AI exposure
- Map AI use cases: marketing content, customer support, research, triage, screening, pricing, or decision support.
- Identify tools: in-house models vs. third-party APIs; confirm vendors, model versions, and update cadence.
- Data sources: training and input data, consent, PHI/PII handling, de-identification, and retention.
- Human-in-the-loop: who reviews outputs, thresholds for manual override, and escalation paths.
- Controls: pre-deployment testing, bias/accuracy monitoring, audit logs, and rollback plans.
- Contracts: vendor indemnity, service levels, limitation of liability, and IP rights around model outputs.
- Disclosures: client-facing disclaimers, documentation of limitations, and user guidance.
- Regulatory: discrimination, privacy, advertising rules, and sector-specific guidance for health/finance.
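As a rough sketch, the discovery items above could be captured in a simple structured intake record so nothing gets skipped during a client conversation. The field and method names here are illustrative assumptions, not part of Counterpart's application or any carrier's form:

```python
from dataclasses import dataclass, field

@dataclass
class AIExposureIntake:
    """Illustrative record mirroring the broker discovery checklist above."""
    use_cases: list[str] = field(default_factory=list)     # e.g. marketing content, triage, pricing
    tools: list[str] = field(default_factory=list)         # in-house models vs. third-party APIs
    data_sources: list[str] = field(default_factory=list)  # training/input data, PHI/PII handling
    human_review: bool = False                             # human-in-the-loop on outputs?
    controls: list[str] = field(default_factory=list)      # testing, bias monitoring, audit logs
    vendor_contracts_reviewed: bool = False                # indemnity, SLAs, IP rights in outputs
    disclosures_in_place: bool = False                     # client-facing disclaimers and guidance
    regulatory_notes: str = ""                             # discrimination, privacy, sector rules

    def open_items(self) -> list[str]:
        """Return the checklist areas still missing information."""
        gaps = []
        if not self.use_cases:
            gaps.append("use cases")
        if not self.tools:
            gaps.append("tools")
        if not self.data_sources:
            gaps.append("data sources")
        if not self.human_review:
            gaps.append("human-in-the-loop")
        if not self.controls:
            gaps.append("controls")
        if not self.vendor_contracts_reviewed:
            gaps.append("contracts")
        if not self.disclosures_in_place:
            gaps.append("disclosures")
        if not self.regulatory_notes:
            gaps.append("regulatory")
        return gaps
```

A partially completed intake then surfaces exactly which exposure questions remain before submission, e.g. `AIExposureIntake(use_cases=["marketing content"], tools=["third-party API"]).open_items()` still lists "controls" and "contracts" among the gaps.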
Claims scenarios to watch
- Biased screening: An AI-assisted hiring or patient triage tool allegedly discriminates against a protected class, triggering a complaint and damages claim.
- Faulty analysis: An AI-generated research brief or financial model contains inaccuracies that lead to client loss and a professional negligence allegation.
- Misleading marketing: AI-written materials overstate product capabilities, drawing regulatory scrutiny and client claims.
- Customer support errors: A chatbot provides incorrect compliance guidance, resulting in a client penalty and demand for recovery.
- Data misclassification: Model labels sensitive records incorrectly, causing mishandling of PHI/PII and downstream harm.
Underwriting focus areas
- Use cases and criticality to core services; potential for bodily injury, discrimination, or financial loss.
- Model governance: validation, drift monitoring, retraining process, and independent review.
- Documentation: data lineage, testing evidence, error budgets, and incident response.
- Third-party dependencies: vendor due diligence, audit rights, and contractual protections.
- Explainability: ability to reconstruct decisions and support defensibility in claims.
Placement and wording tips
- Ask for explicit "Affirmative AI" wording and a dedicated insuring agreement where available.
- Define "automated decision system," "AI tool," and "model output" to reduce disputes.
- Check carve-backs for discrimination, IP, contractual liability, and vicarious liability via third-party tools.
- Clarify boundaries between Tech E&O, MPL/Allied Health, and Cyber for data events vs. professional services errors.
- Confirm sublimits and conditions for regulatory proceedings, privacy events, and media/misinformation claims.
- Note reporting duties for material changes to models, vendors, or use cases.
Industry context
"AI-related risks are evolving. Coverage is by no means guaranteed by traditional E&O, D&O, Cyber, and CGL policies, which are 'silent' on such exposures and may be mitigated through AI-related exclusions or other policy limitations. To avoid any potential gap in coverage, companies should consider affirmative cover for the concentric circles of AI-related liability," said Ommid C. Farashahi, insurance coverage partner at BatesCarey LLP.
"AI risks have moved from theory to the courtroom," said Mike Muglia, Counterpart's professional liability lead. "These endorsements give our brokers practical solutions for claims that come from everyday AI use, whether it's bad outputs, decision errors, or machine-generated bias."
What it means for small businesses
As more teams use AI to move faster and serve clients better, liability grows in step. Counterpart's move brings clear, accessible protection to the front of the policy instead of leaving brokers to interpret gray areas after a loss.
With more than 28,000 policies placed through 2,800 brokers, the company is extending reach across new professions and risk profiles. For brokers, the takeaway is simple: surface AI use early, place affirmative wording, and keep the documentation tight.
Resources
- Discussion on governance and automated decisions: Harvard Law's Forum on Corporate Governance
- Help clients upskill on AI use cases by role: Complete AI Training - Courses by Job