Same Art, Harsher Verdicts When AI Is Involved

Study finds bias against AI art in copyright suits: jurors often see infringement and award higher damages, even when the work matches a human's. Focus on the work, not the label.

Categorized in: AI News, Legal
Published on: Dec 07, 2025

AI Artwork Faces Higher Legal Scrutiny, Study Finds

New empirical research points to a consistent bias against AI-generated art in copyright disputes. When factfinders think an AI created the allegedly infringing work, they assign greater culpability and higher damages, even when the inputs and the resulting work are identical to a human artist's.

Co-authored by Joseph Avery (University of Miami Patti and Allan Herbert Business School) and Mike Schuster (University of Georgia), the study, "AI Artists on the Stand: Bias Against Artificial Intelligence-Generated Works in Copyright Law," published in the UC Irvine Law Review, labels this effect the "AI litigation penalty."

The core insight is simple and unsettling: what we can't see (the creative process) changes how we judge what we can see (the final work). As Avery puts it, if a human and an AI do the exact same thing, people still react differently.

What the study did and found

Participants saw an original copyrighted work, then two identical accused works produced under the same conditions. One was attributed to a human, the other to AI.

The AI-attributed work was judged less ethical, less fair, and lower in quality. As mock jurors, participants were more likely to find infringement or plagiarism and to impose greater damages when AI was involved.

The pattern extends beyond copyright. Forthcoming work by Avery suggests similar penalties in patent and trade secret disputes. Causes remain unclear, though the researchers suspect multiple drivers, including a tendency to reward what feels human. They also note the bias could shift as people get used to AI.

Why this matters for litigators and in-house counsel

  • Pleadings and narrative: Expect jurors to scrutinize AI-labeled works. Frame human oversight, intent, and process early. Anchor on substantial similarity, access, and protectable expression, not the tool.
  • Motions in limine: Seek to exclude prejudicial labels like "AI-made" unless probative. Push for neutral wording ("tool-assisted") and limit unnecessary process evidence.
  • Jury instructions: Propose instructions clarifying that the method of creation is not itself evidence of infringement. Ask for a caution against bias tied to AI usage.
  • Demonstratives: Use side-by-side comparisons and blind evaluations to keep the focus on the works-not on the creator label.
  • Expert testimony: Consider a social science or survey expert to explain perceptual bias and present blind testing results.
  • Jury selection: Probe attitudes about AI, creativity, and fairness. Look for strong priors that could magnify the penalty.
  • Trial forum: Where strategy allows, consider a bench trial if you expect strong anti-AI sentiment in the jury pool.
  • Damages strategy: Preempt arguments that AI use warrants higher penalties by tying damages to market harm and traditional factors.
  • Discovery discipline: Be mindful of internal messages that frame AI outputs as "lesser" or "shortcut" work. These can feed bias at trial.

Practical steps before a dispute

  • Document process: Keep records of prompts, human edits, references, and decision-making. Show human contribution and quality control.
  • Governance: Adopt clear policies on training data vetting, licensing, and model selection. Track the provenance of assets.
  • Labeling strategy: Use neutral internal language (e.g., "tool-assisted creation"). Avoid marketing claims that invite moral judgment.
  • Training: Educate creative teams and counsel on copyright basics for AI-assisted workflows, substantial similarity, and fair use boundaries.
  • Contracts and insurance: Tighten warranties, indemnities, and audit rights with vendors. Review coverage for AI-related IP risks.

Evidence and argument themes that land

  • Focus jurors on protectable expression and concrete similarities, not the production method.
  • Address fairness and quality head-on; don't dismiss concerns. Show human intent, supervision, and iterative editing.
  • Offer blind comparisons or surveys to separate perception from label-driven bias.
  • Request specific instructions that the use of AI, by itself, is not evidence of copying or bad faith.

Policy signal for courts and companies

Copyright exists to encourage creative output. If we punish works simply because AI was in the loop, we chill experimentation and narrow the kinds of expression that reach the market.

The bias is real today. It may weaken as exposure to AI grows. Until then, expect the label "AI" to move liability and damages, and plan your case around it.

Further reading

If your team needs a primer on AI concepts without the hype, see this curated list by role: AI courses by job.

