AI Just Took a Board Seat: Does Your Insurance Cover Its Decisions?

AI is entering the boardroom, but D&O was built for humans, leaving gaps. Update definitions, add endorsements, and tighten governance to limit claims and regulatory heat.

Categorized in: AI News, Insurance
Published on: Nov 07, 2025

Boardroom Bots: Is Your AI-Powered Director Covered By Insurance?

AI is stepping into the boardroom. It can review data, recommend strategy, and even cast votes. That's exciting, but here's the catch: your insurance program was built for human decision-makers.

If your organization is piloting an AI "director" or assigning board-level authority to an AI system, treat it as a distinct risk. The coverage gaps are real, and they won't fix themselves.

The core issue: Is an AI a "Director or Officer" under D&O?

Most D&O policies define an Insured Person as a natural person. An AI is not. That means claims tied directly to an AI "director's" decisions may fall outside Side A/B coverage for individuals.

Side C (entity securities coverage) may still respond to securities claims against the company, but that won't close the gap for non-securities claims or protect human directors accused of inadequate oversight of the AI. Without endorsements, the AI itself is typically uninsured under D&O.

Where claims are likely to come from

  • Derivative suits: Shareholders allege breach of fiduciary duty for delegating critical decisions to an AI or failing to supervise its use.
  • Securities actions: Statements about AI capability, controls, or financial impact turn out to be inaccurate, leading to stock drops and suits.
  • Regulatory investigations: Agencies question governance, model risk management, explainability, bias controls, or disclosures.
  • Employment claims: AI-influenced hiring, promotion, or termination decisions trigger discrimination claims.
  • Contract disputes: Counterparties challenge AI-driven decisions or outputs as breaches of agreed standards.

Your insurance tower: what likely responds, and what likely won't

  • D&O: Strong for securities claims and oversight allegations against human directors and officers. Weak for covering the AI itself due to "natural person" definitions. Consider endorsements addressing non-human decision tools and clarifying that AI-driven decisions fall within "wrongful acts."
  • Cyber: May respond to data breaches, privacy events, or system failure tied to AI integration. Look for coverage for system outage, algorithmic error, and media liability (defamation/IP) from AI outputs; many policies still exclude or limit these.
  • Tech E&O/Professional Liability: Key if you build, license, or rely on a vendor's AI. Secure vendor indemnity, additional insured status, and evidence of limits that match your exposure.
  • EPL: Add or confirm third-party discrimination and coverage for automated decisioning in HR workflows. Make sure "employment decisions" exclusions in other lines don't eat into your protection.
  • Fiduciary (ERISA): If AI informs plan administration or investments, verify coverage for alleged breaches tied to those recommendations.
  • Crime: Funds transfer fraud is tricky if an AI "authorizes" payment. Many forms require "fraudulent instruction" from an outsider. Clarify terms for AI-triggered transactions and human verification protocols.

Policy wording hot spots to review now

  • Who is an Insured? Add language that addresses AI systems acting under board mandate and clarifies coverage for the entity and human overseers.
  • Wrongful Act: Ensure it captures decisions informed or executed by AI and includes failures in model oversight, governance, and validation.
  • Professional services exclusions: Avoid broad exclusions in D&O that could bar claims tied to AI-enabled operations.
  • Conduct exclusions: These often apply to Insured Persons. Confirm they don't create odd outcomes when the "actor" is a system.
  • Investigations and inquiries: Expand "Claim" to include pre-claim investigations by regulators focused on AI risk.
  • Insured vs. insured: Preserve carve-outs for derivative claims and whistleblower matters that may center on AI decisions.
  • Contractual liability: If your AI workflows are bound by SLAs, confirm you're not boxed out of coverage by contract-only exclusions.

Governance moves that reduce loss and help your claims

Underwriters price uncertainty. Reduce it. Your goal is visible, repeatable control over how AI influences decisions.

  • Define authority: Put in writing where the AI advises, where it decides (if at all), and where a human must approve.
  • Assign ownership: Board committee or risk function with clear escalation triggers, KPIs, and kill-switch authority.
  • Document rigor: Model cards, data lineage, validation results, bias testing, and post-incident reviews. Make them discoverable and audit-ready.
  • Third-party controls: Due diligence, testing rights, indemnities, and ongoing monitoring for vendor models and updates.
  • Human-in-the-loop: For high-impact calls (M&A, capital deployment, workforce actions), require independent human review and sign-off.
  • Disclosure discipline: Marketing and investor communications must match reality. Overpromising AI capability is a fast path to securities risk.

A simple checklist for insurance teams

  • Map where AI touches board-level or material decisions.
  • Identify which claims would hit D&O, Cyber, Tech E&O, EPL, Fiduciary, and Crime.
  • Run a wording gap analysis: definitions, exclusions, investigations, and severability.
  • Request endorsements for AI-influenced decisions and non-human decision tools.
  • Push vendors to carry Tech E&O with limits that reflect your downside; secure additional insured and indemnity.
  • Tighten incident response to include model failure and AI output risk alongside cyber events.
  • Educate the board on what is and isn't covered. Set expectations now, not after a claim.

Two realistic claim scenarios

  • AI-driven acquisition misstep: The AI recommends a deal based on flawed data. Post-close losses lead to a derivative suit alleging failure of oversight. D&O may respond for human directors; the AI itself is not an insured, highlighting the definitional gap.
  • Algorithmic bias in hiring: An AI screens candidates and skews results against a protected class. Class action follows. EPL could respond if third-party discrimination is included; Cyber or Tech E&O might be implicated depending on outputs and vendor roles.

Standards worth aligning to

You don't need to reinvent AI governance. Build your controls around widely recognized frameworks, such as the NIST AI Risk Management Framework and ISO/IEC 42001, and document how you comply. It helps both underwriting and claims handling.

How to brief your broker and carriers

  • Explain the AI's role, decision rights, and override controls in plain terms.
  • Share governance artifacts: policies, testing results, and monitoring cadence.
  • Ask for specific language solutions: insured definition tweaks, wrongful act expansion, and investigation cost coverage tied to AI issues.
  • Confirm claim reporting triggers for model failures and regulatory inquiries focused on AI.

Bottom line

AI can help the board make faster, more informed decisions. It also introduces a coverage gap if you treat it like a human director on a human policy form.

Tighten your definitions, close exclusions, align to known frameworks, and get vendor risk off your balance sheet. If you do that, you'll keep control of both the boardroom and the claim file.

Want to upskill your team on AI oversight?

If your board or risk function needs practical training on AI systems and controls, explore role-based options here: AI courses by job.

