Who Covers the Damage When an AI Agent Goes Rogue? This Startup Has an Insurance Policy for That
The Artificial Intelligence Underwriting Company (AIUC) has just raised $15 million in a seed round led by Nat Friedman at NFDG, with backing from Emergence, Terrain, and angels including Anthropic cofounder Ben Mann and former CISOs from Google Cloud and MongoDB. Its mission is clear: to build the insurance, audit, and certification systems necessary for safely integrating autonomous AI agents into enterprises.
AIUC’s CEO Rune Kvist emphasizes that insurance for AI agents—autonomous systems making decisions and taking actions without constant human oversight—is about to become a significant market. Kvist, who was Anthropic’s first product and go-to-market hire in 2022, leads a founding team including CTO Brandon Wang, a Thiel Fellow with experience in consumer underwriting, and Rajiv Dattani, a former McKinsey partner with deep expertise in global insurance and AI model evaluation.
Creating Financial Incentives to Reduce AI Agent Risk
At the core of AIUC’s strategy is a new risk and safety framework called AIUC-1, tailored specifically to AI agents. It draws on existing frameworks and regulations such as the NIST AI Risk Management Framework, the EU AI Act, and the MITRE ATLAS threat model, and adds auditable, agent-specific safeguards to give enterprises trust signals comparable to those in cloud security or data privacy.
Kvist highlights the role of insurance in driving risk reduction: “Insurance creates financial incentives to reduce risk. We track where failures happen and what problems need solving. Insurers often require certain safeguards to be in place before certification.”
While other startups are working on AI insurance, AIUC stands out by building a comprehensive agent standard designed to prevent failures before they occur. John Bautista, a partner at Orrick who helped develop AIUC-1, adds that companies face legal uncertainty when adopting AI. “AIUC-1 offers a clear, consolidated standard that simplifies compliance amid evolving laws and frameworks,” he says.
The Need for Independent Vendors
Insurance has historically played a key role in American innovation. From Benjamin Franklin’s mutual fire insurance company to the rise of UL Labs and automotive crash-test standards, independent bodies have driven safety and trust. AIUC bets on the same pattern repeating for AI.
“It’s not Toyota that does car crash testing; independent organizations do,” Kvist explains. “We need an independent ecosystem to answer if we can trust these AI agents.” AIUC plans to provide a three-part system:
- Standards: AIUC-1 sets a technical and operational baseline.
- Audits: Independent testing probes AI agents for failures such as hallucinations, data leaks, or dangerous behavior.
- Liability Coverage: Insurance policies protect customers and vendors if agents cause harm, with pricing tied to safety performance.
For example, if an AI sales agent leaks customer personal data or a financial assistant fabricates policy details, AIUC’s insurance could cover the resulting losses. The goal is to push AI vendors toward better practices by offering better insurance terms for passing AIUC-1 audits—similar to how car insurance rates improve with safety features.
Using Insurance to Align Incentives
AIUC believes market mechanisms, not only government regulation, can foster responsible AI development. Top-down rules are difficult to perfect, and voluntary safety promises from major AI companies have already been scaled back.
Insurance offers a third path: one that aligns incentives and can evolve alongside the technology. Kvist compares AIUC-1 to SOC 2, the widely accepted security certification that signals trustworthiness to enterprise clients. He predicts AI agent liability insurance will become as essential as cyber insurance, growing into a $500 billion market by 2030.
AIUC is already collaborating with enterprise customers and insurance partners, aiming to become the industry benchmark for AI agent safety.
Nat Friedman, former GitHub CEO, understands the trust challenges firsthand. When launching GitHub Copilot, he witnessed customers’ concerns about intellectual property risks. After a brief pitch meeting, Friedman decided to invest in AIUC’s seed round before joining Meta’s Superintelligence Labs.
“These agents promise to do more work autonomously,” Kvist says. “That raises liability significantly—and with it, interest in insurance.”