How Existing Liability Frameworks Can Handle Agentic AI Harms
Calls for sweeping new AI liability regimes are loud, but often unnecessary. For most agentic AI in use today, negligence and products liability already provide workable tools. With a few targeted adjustments, these doctrines can allocate risk, incentivize safety, and avoid chilling useful deployments.
A law-and-economics lens: incentives first
Liability is about shaping behavior. Developers are typically the least-cost avoiders: they pick data, training regimes, model design, and safeguards. They should be expected to prevent foreseeable harm, improve accuracy where reasonable, and make systems explainable enough for oversight.
But users are not passengers. If a clinician knows an AI tool's error rate and still treats it as infallible, some responsibility shifts to the user. Accountability should scale with the user's capacity to understand and manage the risk. Clear warnings from developers are key to making that shift fair and effective.
There's a policy tension worth acknowledging. AI systems often generate public benefits that exceed the private returns developers or early users can capture. If we impose liability too aggressively during early deployment, we risk slowing learning-by-doing that could reduce accidents and improve outcomes over time.
How current doctrines already apply
Negligence is the default. Victims must show breach and causation, which is hard with opaque systems. The classic risk-benefit calculus (the Hand formula) applies: if the cost of a precaution was lower than the expected harm it would have prevented, failing to adopt it is negligent. Causation is the sticking point when it's unclear whether better practices would have avoided the outcome.
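In law-and-economics shorthand, that is breach where B < P × L, with B the burden of the precaution, P the probability of the harm it would prevent, and L the magnitude of the loss. A minimal sketch of the arithmetic in Python, using purely hypothetical figures (the $50,000 testing cost, 2% risk, and $5,000,000 loss are illustrative assumptions, not drawn from any case):

```python
# Hand formula sketch: skipping a precaution looks negligent when its cost (B)
# is less than the expected harm it would have prevented (P * L).
def breach_under_hand_formula(burden: float, probability: float, loss: float) -> bool:
    """Return True when B < P * L, i.e. the untaken precaution was cost-justified."""
    return burden < probability * loss

# Hypothetical numbers: a $50,000 adversarial-testing pass would have removed a
# 2% chance of a $5,000,000 injury, so the expected harm avoided is $100,000.
print(breach_under_hand_formula(burden=50_000, probability=0.02, loss=5_000_000))  # True
```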
Courts already handle that problem in medical malpractice with "loss of chance": damages are scaled to the chance of a better outcome that the negligence took away. A similar approach can fit AI incidents where precise causation is uncertain but the failure materially increased the risk. Scaling recovery this way keeps the information asymmetry of opaque systems from fully shielding careless actors.
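A minimal sketch of that probability-scaled recovery; the 40% and 25% recovery odds and the $1,000,000 injury value are purely hypothetical:

```python
# Loss-of-chance sketch: damages are scaled to the chance of a better outcome
# that the negligent conduct took away, rather than awarded all-or-nothing.
def loss_of_chance_damages(full_harm: float,
                           chance_without_negligence: float,
                           chance_with_negligence: float) -> float:
    """Scale the full award by the probability lost to the negligent conduct."""
    lost_chance = max(chance_without_negligence - chance_with_negligence, 0.0)
    return lost_chance * full_harm

# Hypothetical: an AI triage error cuts recovery odds from 40% to 25% on a
# $1,000,000 injury, so damages come to roughly 15% of the full award.
print(f"${loss_of_chance_damages(1_000_000, 0.40, 0.25):,.0f}")  # $150,000
```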
Victims still face steep discovery and proof challenges. The European Union even considered, in its proposed AI Liability Directive, a rebuttable presumption of causation for AI-related harm, given asymmetries in access to evidence. Without tools like presumptions or structured disclosure duties, both compensation and deterrence suffer.
Products liability: a better anchor for developer risk
Products liability offers two relevant pathways. Design defects turn on a risk-utility test: could the product reasonably have been made safer? Manufacturing defects impose strict liability when a product deviates from its intended design and causes harm.
Most AI failures stem from training choices, not broken components, so they're treated as design defects. That limits strict liability's reach. A sensible recalibration is to treat materially flawed or biased training data that degrade performance as functionally akin to a manufacturing defect. That would put strict liability on developers for substandard training pipelines while keeping negligence for user misuse.
The duty to warn remains central. Developers should provide specific, comprehensible warnings: quantified error rates, known failure modes, prohibited use cases, and supervision requirements. The more complex the system, the heavier the duty to translate risk into operational guardrails users can follow.
Practical playbook for counsel
- Contract allocation: Warrant data provenance and training quality; require update commitments, safety patches, and post-market monitoring. Build in audit rights, model change logs, and evidence preservation.
- Documentation: Maintain model cards, data lineage, validation protocols, and red-team results. Record risk-utility tradeoffs contemporaneously.
- Warnings and UX: Present quantified error rates and clear limits in-product. Gate high-risk features, default to safe settings, and require human sign-off where appropriate.
- Monitoring and recall: Log inputs/outputs, trigger incident response on safety thresholds, and define recall or rollback processes for harmful behaviors (see the sketch after this list).
- Insurance and indemnities: Require vendors to carry appropriate lines (products/completed ops, tech E&O, cyber). Calibrate caps and exclusions to deployment risk.
- User accountability: Train end users, certify competencies for high-risk uses, and implement acceptance-of-risk flows where warnings are explicit.
- Causation support: Preserve data needed for "loss of chance" analysis, including evaluation datasets, A/B test results, version history, and chain of custody for outputs.
- Governance: Stand up a cross-functional review board, schedule periodic safety assessments, and refresh the duty to warn with each material model update.
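For the monitoring-and-recall item above, here is a minimal sketch of threshold-triggered escalation in Python. The SafetyMonitor class, its error_rate_threshold, and the 5% tolerance are hypothetical illustrations of the pattern, not any particular vendor's API:

```python
from dataclasses import dataclass, field

# Hypothetical monitoring sketch: log reviewed outputs and escalate to incident
# response when the observed error rate crosses a pre-agreed safety threshold.
@dataclass
class SafetyMonitor:
    error_rate_threshold: float = 0.05            # assumed tolerance; set per deployment
    window: int = 1000                             # number of recent outputs to evaluate
    outcomes: list = field(default_factory=list)   # True = output flagged as harmful/incorrect

    def record(self, flagged: bool) -> None:
        """Log one reviewed output, keeping only the most recent window."""
        self.outcomes.append(flagged)
        self.outcomes = self.outcomes[-self.window:]

    def should_escalate(self) -> bool:
        """True when the recent error rate exceeds the agreed threshold."""
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) > self.error_rate_threshold

monitor = SafetyMonitor()
for flagged in [False] * 90 + [True] * 10:         # hypothetical review results: 10% flagged
    monitor.record(flagged)
if monitor.should_escalate():
    print("Escalate: pause the feature, open an incident, consider rollback.")
```

The point of the sketch is contractual as much as technical: the threshold, window, and escalation steps are the kinds of parameters counsel can write into vendor agreements and incident-response plans.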
Targeted policy tweaks that help
- Evidence access: Limited presumptions of causation or adverse inferences when developers control key evidence but fail to preserve or disclose it.
- Disclosure duties: Statutory obligations for safety logs, significant changes, and known hazards, paired with safe harbors for good-faith reporting.
- Risk sharing for high-value use cases: Consider capped liability or public insurance pools where systems plausibly yield broad safety gains over time, while maintaining strict standards for negligent conduct.
Bottom line
Agentic AI looks more like a product than a legal alien. Hold developers responsible as least-cost avoiders, especially for training and design choices. Hold users responsible when they ignore clear warnings and use systems in risky ways. Reframe serious training-data flaws as "manufacturing-like" to trigger strict liability, and borrow loss-of-chance where causation is murky. That's a recalibration, not a reset.
Further reading
- Cornell LII: Products liability overview
- European Commission: AI liability initiatives
If your legal team needs practical AI fluency for contracting and governance, explore curated training by job role here: Complete AI Training.