U.S. courts and state legislatures tighten scrutiny of generative AI amid product liability surge and federal deregulation push

U.S. courts are seeing a surge in product liability suits against AI companies, covering wrongful death, deepfakes, and nonconsensual imagery. States are passing disclosure and safety laws faster than Congress can act.

Published on: Apr 23, 2026

AI Companies Face Mounting Legal Pressure Over Safety, Data Use

U.S. courts are seeing a surge in product liability cases against generative AI companies, with disputes centering on training data sourcing and alleged harms from AI deployment. The cases span wrongful death claims, nonconsensual intimate imagery, and deepfakes, while AI companies themselves are mounting constitutional challenges to new state regulations.

Product liability plaintiffs argue that AI companies shipped products with defective designs and inadequate safety standards. They allege failures to warn users of known dangers and the use of manipulative design features that maximize engagement and foster emotional dependency.

Wrongful Death Claims Lead Litigation

Wrongful death claims form the bulk of product liability cases against AI companies. Plaintiffs typically represent estates of deceased individuals and assert negligence claims, arguing that companies failed to exercise reasonable care in system design and failed to provide adequate warnings.

An emerging category involves scenarios where plaintiffs allege that an AI product induced a user to harm a third party, someone who was not themselves a platform user. This expansion signals a willingness to test whether AI company liability extends beyond the direct user relationship.

Intimate Imagery and Deepfakes Create New Exposure

Product liability claims increasingly involve allegations that AI products generate nonconsensual intimate images and deepfakes, sometimes involving minors. Plaintiffs argue companies failed to implement adequate safety guardrails such as training-data filtering and image classifiers.
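
The guardrails plaintiffs point to are concrete engineering controls, not abstractions. Below is a minimal Python sketch of an output-moderation gate of the kind at issue; `classify_image`, `SafetyVerdict`, and the thresholds are hypothetical stand-ins for a real trained classifier, not any company's actual pipeline.

```python
# Illustrative sketch of a pre-release moderation gate: score each generated
# image with a safety classifier and withhold anything over threshold.
# `classify_image` and its fields are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    nsfw_score: float        # 0.0 (safe) to 1.0 (clearly unsafe)
    minor_likelihood: float  # estimated probability the image depicts a minor

def classify_image(image_bytes: bytes) -> SafetyVerdict:
    # Placeholder: a real system would run a trained detection model here.
    return SafetyVerdict(nsfw_score=0.0, minor_likelihood=0.0)

def release_image(image_bytes: bytes, nsfw_threshold: float = 0.5) -> bytes | None:
    verdict = classify_image(image_bytes)
    # Withhold any output flagged as intimate imagery or as depicting a minor;
    # a near-zero tolerance is applied to the minor signal.
    if verdict.nsfw_score >= nsfw_threshold or verdict.minor_likelihood >= 0.01:
        return None
    return image_bytes
```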

Federal and state enforcement is tracking this issue closely. The Take It Down Act prohibits publication of nonconsensual intimate images, whether authentic or computer-generated, and requires covered platforms to remove such content within 48 hours of a valid removal request. State attorneys general have raised concerns about whether AI companies are taking adequate prevention steps.
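
The 48-hour window is a concrete, trackable compliance parameter. A minimal sketch of deadline tracking for validated removal requests follows; the class and field names are assumptions for illustration, not drawn from the statute.

```python
# Sketch: track the Take It Down Act's 48-hour removal deadline per request.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # statutory window after a valid request

@dataclass
class RemovalRequest:
    content_id: str
    received_at: datetime  # timezone-aware time the valid request arrived

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.deadline

# Example: a request received now is due 48 hours from now.
req = RemovalRequest("img-123", datetime.now(timezone.utc))
print(req.deadline.isoformat(), req.is_overdue())
```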

Constitutional Challenges to State AI Laws

As state legislatures enact AI laws imposing disclosure obligations, AI companies are filing constitutional challenges. California's Generative AI Training Data Transparency Act, which requires developers to publicly disclose training dataset summaries, faced a First Amendment challenge alleging compelled speech and a Fifth Amendment challenge alleging an unconstitutional taking of trade secrets.

A federal court denied the preliminary injunction motion, leaving the law in effect. The court found that compelled disclosure of "purely factual and non-controversial information" is permissible when reasonably related to a substantial government interest, classifying the disclosures as commercial speech subject to intermediate scrutiny rather than strict scrutiny.

A New York federal court similarly dismissed a First Amendment challenge to the state's Algorithmic Pricing Disclosure Act, which requires businesses using personalized pricing algorithms to disclose this to consumers. Together, these decisions signal an emerging judicial consensus that AI disclosure obligations will be analyzed under the commercial speech standard.

One AI company is separately challenging its designation by the government as a national security supply chain risk, asserting First Amendment violations based on alleged viewpoint discrimination and Fifth Amendment due process violations.

State Laws Tighten AI Chatbot Requirements

State legislatures are moving faster than Congress. California's S.B. 243, Washington's H.B. 2225, and Oregon's S.B. 1546 require AI chatbot operators to clearly disclose that users are interacting with an AI, implement safeguards against outputs that could induce suicidal thoughts, and apply heightened protocols for suspected minor users.
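
In implementation terms, the shared obligations reduce to an up-front disclosure and a response-time safeguard. The sketch below is a hedged illustration under assumed names (`open_session`, `respond`, the banner text); none of it is language drawn from the statutes.

```python
# Sketch of the two obligations common to S.B. 243, H.B. 2225, and S.B. 1546:
# disclose that the user is talking to an AI, and never pass through a reply
# flagged as touching on self-harm. All names here are illustrative.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
CRISIS_RESOURCES = "If you are having thoughts of self-harm, call or text 988."

def open_session(suspected_minor: bool) -> dict:
    return {
        "banner": AI_DISCLOSURE,        # clear disclosure at session start
        "minor_mode": suspected_minor,  # triggers heightened protocols downstream
    }

def respond(session: dict, model_reply: str, self_harm_flagged: bool) -> str:
    # Safeguard: replace flagged replies with crisis resources rather than
    # returning the model output unchanged.
    if self_harm_flagged:
        return CRISIS_RESOURCES
    return model_reply
```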

Several of these laws create private rights of action, giving plaintiffs new grounds for claims. Washington's H.B. 2225 and Oregon's S.B. 1546 fall into this category, as does California's A.B. 621, which applies to deepfake pornography victims. California's A.B. 316 explicitly prevents companies from using an "autonomous AI" defense to shield themselves from liability.

Federal Approach Favors Light-Touch National Standard

The Trump Administration is signaling a deregulatory posture toward AI through its AI Action Plan and a Department of Justice AI Litigation Task Force charged with challenging state AI laws the Administration characterizes as "onerous."

However, this does not mean a regulation-free approach. The White House's National AI Legislative Framework calls for Congress to establish a "minimally burdensome national standard" that would preempt fragmented state regulations. The framework also encourages legislation to strengthen parental controls over children's privacy, prevent government coercion of content moderation, and explore collective negotiation frameworks for intellectual property compensation.

The bipartisan AI Foundation Model Transparency Act (H.R. 8094), introduced March 26, 2026, aims to establish transparency requirements for how foundation models are built, trained, and deployed. The bill would direct the FTC to set disclosure standards for high-impact foundation models.

Practical Guidance for Companies

One notable wrongful death case has reached a settlement in principle, though terms remain undisclosed, a sign that parties may prefer private resolution to protracted litigation.

For organizations developing and deploying AI products, the legal environment is complex and fragmented. Without Congressional action, existing state AI laws remain in place and enforceable. The most prudent approach is to continue complying with state requirements until greater clarity emerges at the federal level.

Legal professionals should monitor these developments closely. Those supporting litigation teams in particular may benefit from understanding how AI regulation intersects with product liability doctrine. The AI Learning Path for Paralegals covers document review, contract analysis, and compliance issues relevant to this evolving area.

