Courts Playing Catch-Up: Generative vs Agentic AI and the Fight Over Accountability

AI is outpacing case law, raising risks around data, copyright, and accountability. Why the generative vs. agentic split matters, and what courts expect next.

Published on: Nov 11, 2025

AI Is Forcing New Legal Questions

Artificial intelligence is moving fast enough to outpace case law. It's improving productivity, but it's also tied to environmental costs, classroom confusion, and a growing stack of legal risks. Without a landmark ruling or deep precedent, many questions around copyright, IP, and liability are still open. For legal teams, that means higher uncertainty and more room for preventable mistakes.

Generative vs. Agentic AI: Why the Distinction Matters

Not all AI is built the same, and that matters in court. Generative systems create content in response to prompts. Agentic systems pursue goals, make decisions, and don't rely on constant human prompts or oversight; think autonomous vehicles, virtual assistants, or task-focused copilots.

IBM frames agentic AI as decision-focused rather than content-creating. That framing is already influencing legal analysis of intent, use, and accountability. See IBM's overview for a concise reference: Agentic AI (IBM).

  • Generative AI: liability often turns on how content was trained, what it outputs, and whether it resembles protected expression.
  • Agentic AI: liability leans into decision-making, automation scope, autonomy, and foreseeability of harm.
  • Responsibility can run user-to-company, user-to-user, or user-to-non-user, depending on who provided data, prompts, access, or outputs.

Case Spotlight: Thomson Reuters v. Ross Intelligence

Westlaw's owner alleged that Ross Intelligence used Westlaw headnotes, procured through third parties, as training material to build a competing legal research tool. According to filings, LegalEase sold Ross about 25,000 "Bulk Memos," prepared with guidance tied to Westlaw headnotes. The crux: access to Westlaw-derived material was used to train Ross's system.

The court emphasized that Ross's tool was not "generative AI," and that its process mirrored Westlaw's functional approach to producing legal research output. That supported a finding that Ross built a market substitute using Westlaw's protectable material. After earlier motions were denied, the court ultimately ruled for the plaintiff on key issues, underscoring how intent, use, market impact, and the system's type (generative vs. agentic) shape outcomes.

Why This Matters

  • Data sourcing is now evidence: Where training or reference data came from can make or break a defense.
  • Functional similarity isn't a safe harbor: If protected expression is used to build a competitor, risk increases even if the end product looks "research-like."
  • System type colors the analysis: Courts are distinguishing decision-making tools from content generators.
  • Market substitution is a red flag: If your tool substitutes for the rightsholder's product, expect heightened scrutiny.

Courts Are Also Policing AI Misuse in Filings

Judges are sanctioning lawyers for fabricated citations created by AI tools. Recent matters include a Massachusetts sanction and a filing in an Alabama bankruptcy case where the firm apologized for inaccurate, non-existent citations generated by AI. The message is simple: authenticity and verification are non-negotiable.

Expect more strictures on certification of authorities, disclosure of AI assistance where required, and possible Rule 11 consequences for made-up cases. If an AI tool touches your brief, you own the results.

Practical Steps for Legal Teams

  • Classify your AI stack: For each tool, label it as generative or agentic; note use cases (drafting, research, analytics, decision support), and whether it trains on your inputs.
  • Lock down data provenance: Ban training on proprietary or licensed databases without explicit rights. Require vendors to disclose training sources and grant indemnities.
  • Replicate without infringing: If you're building internal tools, avoid ingestion of protected headnotes, editorial enhancements, or gated databases. Prefer licensed corpora or public domain materials.
  • Set verification defaults: No AI-generated citations without human validation and source documents. Require pin cites, docket links, and PDF copies before anything reaches the court.
  • Tighten confidentiality controls: Disable chat history retention where possible, use enterprise instances, and block model providers from using client data for training.
  • Paper the vendor relationship: Add representations on data sources, opt-out of model training on your data, secure IP warranties, and predefine incident response for output errors or infringement claims.
  • Create an AI register: Track who is using which tools, for what matters, with what data, and under which settings. Audit quarterly. (A minimal register sketch follows this list.)
  • Train your team: Give lawyers and staff a clear playbook for acceptable AI use, verification, and disclosure. For structured upskilling, see AI courses by job.
  • Monitor emerging decisions: Focus on cases dealing with training data, market substitution, provider liability, and terms-of-service circumvention.
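
To make the "classify your AI stack" and "create an AI register" items concrete, here is a minimal sketch of what a single register entry might record. It is written in Python purely for illustration; the class name, field names, and example values are assumptions, not a prescribed schema or any vendor's format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Literal

# Hypothetical register entry. Field names and example values are illustrative only.
@dataclass
class AIToolRecord:
    tool_name: str
    system_type: Literal["generative", "agentic"]         # the classification discussed above
    use_cases: list[str] = field(default_factory=list)    # drafting, research, analytics, decision support
    trains_on_inputs: bool = False                         # does the vendor train models on your data?
    data_sources_disclosed: bool = False                   # has the vendor documented its training sources?
    confidentiality_settings: str = ""                     # e.g. "enterprise instance, history retention off"
    matters_in_use: list[str] = field(default_factory=list)  # which matters or teams rely on it
    last_audited: date | None = None                       # the checklist suggests auditing quarterly

# Example entry for a hypothetical research copilot (placeholder name).
register = [
    AIToolRecord(
        tool_name="ExampleResearchCopilot",
        system_type="generative",
        use_cases=["drafting", "research"],
        trains_on_inputs=False,
        data_sources_disclosed=True,
        confidentiality_settings="enterprise instance, retention disabled",
        matters_in_use=["Matter 2025-014"],
        last_audited=date(2025, 11, 1),
    )
]

# Quarterly audit check: flag tools that have not been reviewed in the last 90 days.
stale = [r.tool_name for r in register
         if r.last_audited is None or (date.today() - r.last_audited).days > 90]
print("Tools overdue for audit:", stale)
```

The same fields work just as well in a spreadsheet or contract-management system; the point is that the generative/agentic classification, data-handling settings, and audit dates live somewhere you can query.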

What to Watch Next

  • Training data lawsuits: More claims over ingestion of proprietary or paywalled content.
  • Provider vs. user liability: Courts parsing who bears risk across the toolchain.
  • Embedded AI in legal tools: Research platforms integrating models will face scrutiny on how their features are built and sourced.
  • Cross-border compliance: Differing rules on AI risk, privacy, and copyright will complicate deployments.
  • Insurance coverage: Expect debates over IP exclusions, fraud, and professional liability triggered by AI outputs.

Bottom line: Classify your tools, control your data, verify your outputs, and update your agreements. Build your policy now, before your next brief, product release, or procurement cycle puts you on the wrong side of a complaint.

