AI Litigation and What It Means for D&O Insurance
AI is reshaping how companies operate, and how they get sued. Misstatements about AI capabilities, weak disclosures, and model-driven failures are feeding shareholder suits, regulatory actions, and correlated losses that hit multiple insurance lines at once.
For carriers and brokers, this isn't theoretical. It's claim activity today, with more coming as expectations rise and results disappoint.
Why shareholder suits are spiking
AI-focused companies (or those leaning on AI for core products) often carry premium valuations. When results miss, the drop can be fast and expensive. That creates fertile ground for securities class actions (SCAs).
From March 2020 through June 30, 2025, there were 53 SCAs with AI-related allegations. Filings more than doubled in 2024 compared to 2023. Many are early-stage, but most will generate covered defense costs under D&O, subject to retentions and terms.
The AI washing problem
"AI washing" is the headline allegation: inflating or misrepresenting AI capabilities, proprietary tech, or the role AI plays in the business. When reality surfaces, valuations slide and suits follow.
- Presto Automation Inc.: The SEC settled charges tied to materially false and misleading statements about an AI product.
- Innodata: Investors alleged the company touted a proprietary AI platform while relying heavily on manual offshore labor.
- Tempus AI: A class action filed after a short-seller report and price decline alleges the company overstated its AI capabilities.
- Evolv Technologies: Allegations include misleading statements about the effectiveness of AI-based weapons detection.
- Telus International: The suit claims the company failed to disclose that AI data solutions cannibalized higher-margin offerings, pressuring profitability.
Regulatory pressure is mounting
SEC: Expect tighter scrutiny of AI claims in filings and marketing. Misstatements by public companies and registered advisers have drawn investigations, enforcement actions, and settlements. D&O and professional lines often respond to formal inquiries, subject to wording and definitions.
DOJ: Criminal actions have targeted AI-centric companies where alleged fraud and obstruction were involved (e.g., a social app startup case with wire and securities fraud charges).
FTC: The Commission is folding AI into its governance structure, emphasizing transparency, accountability, and public benefit in how companies build and deploy AI.
States are writing their own rules
States are moving fast, especially where AI touches consumer risk and discrimination.
- Colorado: The Colorado AI Act requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
- California: New obligations require large AI developers to disclose the steps they take to reduce catastrophic model risks.
Fraud, negligence, and deceptive practices
- Joonko: Regulators allege classic fundraising fraud dressed in AI language.
- IRL: Prosecutors claim the user base was largely bots while investors were told a different story.
- OpenAI case: A suit alleges negligence and deceptive practices tied to harmful advice from a model; litigation is ongoing.
Systemic exposure and correlated claims
AI creates correlated risk. If one widely used AI tool fails, or if an industry broadly applies similar models, the same error can drive a surge of claims against many insureds, or multiple actions against a single insured, across D&O and E&O.
We have a precedent. In 2005, hundreds of auto insurers were hit with a class action over claims software alleged to underpay certain motorist claims. Settlements topped $1 billion. AI could repeat that pattern at greater scale, especially if model-driven decisions replace professional judgment without clear disclosure.
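To see why the tail, not the average, is the problem, consider a toy simulation. The sketch below is illustrative only: every parameter is hypothetical, and the two simulated books have the same expected claim frequency. The difference is that in one book AI failures are independent, while in the other 60% of insureds depend on the same vendor model.

```python
import random

random.seed(7)

N_INSUREDS = 500        # hypothetical book of insureds with material AI exposure
P_FAILURE = 0.02        # annual chance any one insured's AI use triggers a claim
P_SHARED_EVENT = 0.02   # annual chance the shared vendor model fails
VENDOR_SHARE = 0.60     # fraction of the book relying on the same vendor model
TRIALS = 10_000         # simulated underwriting years

def independent_year() -> int:
    # Every insured fails on its own: losses diversify across the book.
    return sum(random.random() < P_FAILURE for _ in range(N_INSUREDS))

def correlated_year() -> int:
    # Insureds off the shared model still fail independently...
    claims = sum(random.random() < P_FAILURE
                 for _ in range(int(N_INSUREDS * (1 - VENDOR_SHARE))))
    # ...but a single vendor-model failure sweeps in every insured using it.
    if random.random() < P_SHARED_EVENT:
        claims += int(N_INSUREDS * VENDOR_SHARE)
    return claims

def percentile_year(simulate, pct: float = 0.99) -> int:
    years = sorted(simulate() for _ in range(TRIALS))
    return years[int(pct * TRIALS)]

print("99th-percentile year, independent book:", percentile_year(independent_year))
print("99th-percentile year, correlated book: ", percentile_year(correlated_year))
```

Under these assumptions both books average roughly ten claims a year, but the correlated book's worst years cluster around the shared-model event, so its 99th-percentile year runs an order of magnitude higher. That tail, not the average, is what erodes limits across a portfolio.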
Underwriting: what's changing
Carrier responses are still forming, but two shifts are clear: tighter scrutiny of AI-related disclosures, and higher rates/retentions for public companies with notable AI exposure.
Expect pressure on wording (definitions of "claim," "investigation," and "wrongful act," plus conduct exclusions and severability) and on how AI-related statements in offering docs, earnings calls, and marketing are vetted.
Practical questions for brokers and underwriters
- AI footprint: Where is AI embedded, whether in the core product, go-to-market, finance, pricing, claims, or security? (A structured intake sketch follows this list.)
- Disclosure controls: Who signs off on AI claims in SEC filings, marketing, and investor decks? Is there model governance review before statements are made?
- Evidence and testing: How are capabilities validated (benchmarks, audits, red-teaming, third-party assessments)? Are limits and failure modes disclosed?
- Vendor and model risk: Which models and providers are in use? What indemnities, SLAs, and monitoring are in place? Is there concentration risk around a single tool or API?
- Bias and safety: What processes address algorithmic discrimination, privacy, and data provenance? How are complaints tracked and remediated?
- Incident playbooks: Do they cover model drift, hallucinations, outages, and misuse? Who communicates with investors and regulators if performance claims don't hold?
- Board oversight: Is there a standing committee or cadence for AI risk? Are minutes and materials maintained to evidence oversight?
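One way to make these questions operational is to capture the answers in a structured intake record that travels with the submission. Here is a minimal sketch in Python; the class and field names are hypothetical, not a standard form:

```python
from dataclasses import dataclass, field

@dataclass
class AIExposureIntake:
    """Hypothetical AI-exposure intake record for one insured."""
    insured: str
    ai_in_core_product: bool              # AI footprint
    disclosure_signoff: str = ""          # who approves AI claims in filings and decks
    validation_evidence: list[str] = field(default_factory=list)  # benchmarks, audits, red-teaming
    model_vendors: list[str] = field(default_factory=list)        # providers and models in use
    bias_controls: bool = False           # algorithmic-discrimination process in place
    incident_playbook: bool = False       # drift, hallucinations, outages, misuse
    board_oversight: bool = False         # standing committee or documented cadence

    def flags(self) -> list[str]:
        """Gaps an underwriter should chase before quoting."""
        gaps = []
        if self.ai_in_core_product and not self.validation_evidence:
            gaps.append("core AI product with no validation evidence")
        if not self.disclosure_signoff:
            gaps.append("no named sign-off for investor-facing AI claims")
        if len(set(self.model_vendors)) <= 1:
            gaps.append("possible single-vendor concentration (or vendors unknown)")
        if not self.incident_playbook:
            gaps.append("no AI incident playbook")
        if not self.board_oversight:
            gaps.append("no documented board oversight of AI risk")
        return gaps

# Example: a submission touting AI in the core product with thin governance.
intake = AIExposureIntake(insured="ExampleCo", ai_in_core_product=True,
                          model_vendors=["VendorA"])
for gap in intake.flags():
    print("FLAG:", gap)
```

An underwriter can run flags() on each submission and chase the gaps before quoting; the same record can double as an entry in the live register of AI-exposed insureds discussed below.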
Policy features to revisit
- Side A/B/C balance: Ensure adequate Side A for individuals if entity coverage gets eroded by parallel class actions.
- Investigation coverage: Confirm coverage triggers for informal vs. formal regulatory activity (SEC, FTC, DOJ, state AGs).
- Conduct exclusions: Push for final, non-appealable adjudication wording and strong severability.
- Interrelated claims: Watch definitions that could collapse separate AI matters into one limit.
- E&O interplay: Map how D&O, E&O, cyber, and media respond if the same AI issue hits customers and investors.
Risk controls that cut loss severity
- Plain-English disclosures: State what the model does, where it fails, and what limitations are known. Avoid hype.
- Governance and documentation: Keep testing logs, third-party reviews, bias assessments, and approval trails for investor-facing claims.
- Change management: Tie product updates to disclosure updates; re-verify performance before earnings calls and fundraising.
- Human-in-the-loop: Maintain human review where model errors create high-severity outcomes.
- Vendor diversification: Reduce single-point-of-failure risk across models and providers.
What's next
Valuations tied to AI expectations look a lot like prior bubbles. If sentiment breaks, litigation will escalate. Transparent disclosures, disciplined model governance, and sharper underwriting will be the difference between manageable defense costs and correlated, capital-draining losses.
For claim handlers and underwriters, track AI-related SCAs through public resources like the Stanford Securities Class Action Clearinghouse. Keep a live register of insureds with material AI exposure and refresh it quarterly.
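The quarterly refresh is easy to enforce mechanically. A minimal sketch, assuming a register keyed by insured name with the date of the last AI-exposure review (names and dates are hypothetical):

```python
from datetime import date, timedelta

QUARTER = timedelta(days=92)  # rough quarter; align to your review calendar

# Hypothetical register: insured name -> date of last AI-exposure review
register = {
    "Acme Robotics": date(2025, 1, 15),
    "Globex Analytics": date(2025, 6, 2),
}

def stale_entries(reg: dict[str, date], today: date) -> list[str]:
    """Insureds whose AI-exposure review is more than a quarter old."""
    return [name for name, reviewed in reg.items() if today - reviewed > QUARTER]

print(stale_entries(register, date(2025, 7, 1)))  # -> ['Acme Robotics']
```

Anything the check returns goes to the top of the renewal-review queue.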
Upskilling teams
If your underwriting, claims, and risk engineering teams need a shared baseline on AI concepts and disclosures, consider role-based courses that focus on practical use and risk.