US courts tilt toward Big Tech on AI copyright; Meta gains time as South Korea watches

U.S. courts are leaning toward fair use for AI training, with delays that aid Big Tech. Agencies need clear sourcing rules and should track class-certification outcomes.

Categorized in: AI News, Government
Published on: Feb 10, 2026

U.S. Courts Signal a Longer Runway for Big Tech in AI Copyright Fights

On February 5, a federal courtroom in San Francisco heard a class-certification motion in a case brought by writers, including author Richard Kadrey, alleging Meta used roughly 190,000 illegally copied books to train its Llama models. The hearing focused on whether the suit should proceed as a class action, a decision that could multiply Meta's potential liability.

Judge Vince Chhabria repeatedly pressed the plaintiffs' lawyers, calling their response inadequate for class certification and warning that certifying the class as proposed could leave other writers bound by an adverse result. He also granted Meta's request for a delay, opting to watch outcomes in similar cases first. That pause favors a well-funded defendant that can absorb time and legal costs.

Why this matters for government

Public agencies are setting rules for data access, AI procurement, and copyright compliance while the courts test the limits. Early U.S. decisions are leaning toward fair use for AI training, which affects how you draft policy, structure contracts, and plan enforcement. If that trend holds, legislative clarity may be required to reset expectations for creators, platforms, and government buyers.

Current U.S. courtroom signals

Since ChatGPT drew attention to training data, courts have often found AI training to be fair use under 17 U.S.C. § 107 when the use is transformative rather than simple copying. Judges have also held that AI outputs that apply or transform source content, rather than reproducing it verbatim, do not infringe.

There are limits. Courts have flagged liability where content is obtained unlawfully. In one California case involving Anthropic, fair use for training was recognized, but the illegal downloading of books was treated as infringement, ending in payments reportedly set at $3,000 per book and totaling about $1.5 billion, which implies roughly 500,000 works.

In last year's ruling in the related case between Meta and U.S. writers, the court found the training at issue to be fair use and described the technology as innovative. Major related suits continue against Google, OpenAI, xAI, and Midjourney. Expect more filings as models scale and datasets widen.

Fair use (17 U.S.C. § 107) remains the central legal test.

Lobbying and litigation resources matter

Legal experts say a pro-innovation posture, especially in California, affects outcomes. Last May, after disputing the idea that AI training is automatically fair use, the U.S. Copyright Office's director, the Register of Copyrights, was reportedly dismissed within a day.

Big Tech's legal benches are deep. In the Meta hearing, eight defense attorneys appeared, drawing comment from the judge on the size of the team. Axios has reported Microsoft, Amazon, Google, and Meta spent over $100 million annually on lobbying during 2024-2025. The combined effect: time is on their side, and delay costs them far less than it costs plaintiffs.

South Korea's debate and the global spillover

South Korea is weighing Copyright Act revisions that would broadly exempt AI training, prompting pushback from creative groups who call it a de facto license for unlimited training. Media groups have warned of lawsuits over news content used to train models, and the Korea Music Copyright Association has raised similar concerns.

Comparative law matters. The EU, UK, and Japan allow text and data mining (TDM) exemptions under conditions, while the U.S. and South Korea lack explicit TDM statutes. As AI adoption grows, the absence of clear TDM rules increases litigation risk and uncertainty for public procurement.

Action items for policymakers and public-sector counsel

  • Codify data access rules: Define permissible training uses, provenance requirements, retention limits, and audit rights in statute or regulation.
  • Procurement safeguards: Require vendors to attest to lawful data sourcing, document licenses, and provide indemnities for infringement claims.
  • Fair use boundaries: Clarify when government-funded projects can rely on fair use versus when licenses are mandatory, especially for high-risk datasets.
  • Evidence standards: Set expectations for training data documentation, chain of custody, and model cards in grant and contract terms (a minimal schema sketch follows this list).
  • Creator remedies: Consider statutory damages or collective licensing options to reduce case-by-case litigation for books, news, images, and music.
  • Coordination: Align with competition and data protection authorities so copyright, antitrust, and privacy policies don't conflict.
  • Watch the courts: Track class-certification rulings and any limits on fair use tied to illegal acquisition. Adjust guidance as decisions land.
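To make the documentation and attestation items above concrete, here is a minimal sketch of a machine-readable provenance record that a contract could require vendors to submit per training dataset. It is written in Python; the field names, validation rules, and example values are illustrative assumptions, not an established schema or any agency's actual requirement.

```python
# Hypothetical per-dataset provenance record for AI procurement terms.
# Field names and checks are illustrative assumptions, not a standard.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetProvenance:
    name: str                    # dataset identifier as cited by the vendor
    source_url: str              # where the data was obtained
    license: str                 # e.g. "CC-BY-4.0", "commercially licensed"
    acquisition_method: str      # e.g. "licensed purchase", "public-domain crawl"
    lawful_basis_attested: bool  # vendor attestation of lawful sourcing
    chain_of_custody: list[str] = field(default_factory=list)  # dated handoffs


def compliance_gaps(record: DatasetProvenance) -> list[str]:
    """Return a list of documentation gaps; an empty list means the record passes."""
    gaps = []
    if not record.lawful_basis_attested:
        gaps.append(f"{record.name}: missing lawful-sourcing attestation")
    if record.license.strip().lower() in ("", "unknown"):
        gaps.append(f"{record.name}: license not documented")
    if not record.chain_of_custody:
        gaps.append(f"{record.name}: no chain-of-custody entries")
    return gaps


if __name__ == "__main__":
    record = DatasetProvenance(
        name="news-corpus-2025",
        source_url="https://example.org/corpus",  # placeholder
        license="commercially licensed",
        acquisition_method="licensed purchase",
        lawful_basis_attested=True,
        chain_of_custody=["2025-03-01 vendor ingest", "2025-03-02 dedup pass"],
    )
    print(json.dumps(asdict(record), indent=2))
    print("gaps:", compliance_gaps(record))
```

A real requirement would align these fields with whatever documentation standard the contracting authority adopts, but the point stands: attestation and chain of custody become checkable artifacts rather than free-text claims.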

What to watch next

  • Class-certification decisions in writer and journalist cases against Meta, Google, OpenAI, and xAI.
  • Any ruling that tightens fair use where data acquisition is suspect.
  • Movement on South Korea's proposed exemptions, and whether they mirror EU-style TDM or take a broader path.

If courts keep favoring fair use for training but punish unlawful sourcing, policy will need to do two jobs at once: protect creators against illicit collection and keep lawful innovation pathways clear for research and public-interest use.

For agencies building AI literacy and procurement muscle, see role-based resources here: Complete AI Training - Courses by Job.

