89% of Japanese Artists Call Generative AI a Serious Threat in Survey of 24,991

Japanese creatives sound the alarm: 88.6% call gen-AI a serious threat, and 93.3% fear lost work. They want consent, transparency, fair pay, and tighter contracts now.

Categorized in: AI News, Creatives
Published on: Feb 05, 2026

89% of Japanese artists say generative AI is a serious threat

A large-scale survey from the Freelance League of Japan (FLJ) shows a clear signal: most Japanese creatives feel exposed. Out of 24,991 respondents, 71.3% were visual artists: illustrators (54.2%), manga artists (15%), and animators (2.2%).

This group isn't guessing. They're seeing shifts in briefs, budgets, and timelines because of generative AI and LLMs. The concern is both broad and specific: a threat to income now, and lost work tomorrow.

The numbers that matter

  • 88.6% agree generative AI is a "serious threat" to their livelihood (65.3% strongly agree; 23.3% somewhat agree).
  • 93.3% are worried about losing current or future work due to gen-AI.
  • About 12% say their earnings have already decreased because of gen-AI.
  • Roughly 10% have secured alternative income streams outside creative work.
  • 92.8% want AI developers legally required to disclose copyrighted works used in training data.
  • 61.6% want prior permission required for machine learning on creative works; 26.6% prefer learning "prohibited in principle."
  • On revenue-sharing models tied to AI training or usage, 33.3% say they can't agree with any option offered.

What creators want from policymakers

  • Mandatory transparency for training datasets, including copyrighted works.
  • Clear labeling of AI output and guidelines for when output is illegal.
  • A fair revenue-sharing framework when training or commercial use involves copyrighted material.
  • Updated labor policies that reflect AI-driven market shifts.
  • An independent governing body to oversee compliance and disputes.

What this means for your practice

You can't wait for perfect legislation. Tighten your operating system now so you keep leverage with clients and platforms.

  • Contracts: Add clauses that ban training on your work without explicit consent, define AI usage limits, and set penalties for misuse.
  • Licensing: Offer clear, tiered licenses (personal, commercial, training prohibited) and charge accordingly.
  • Portfolio control: Share lower-res or watermarked previews; deliver high-res behind gated links with expiring access.
  • Data tracking: Log where your work is posted and used. Keep dated proofs and hashes for takedowns or claims.
  • Client education: Spell out the risks of AI-trained lookalikes and why your process reduces brand/legal risk.
  • Differentiation: Package your style with process, story, and collaboration, the things templates can't replicate.
  • Community: Join or form collectives for shared contract templates, enforcement funds, and collective bargaining.
  • Income mix: Build direct channels (newsletter, memberships, limited editions, workshops) so fewer people can cut you out.
  • Policy watch: Track opt-out mechanisms and registry tools as they emerge; opt out where available until terms are fair.
  • Upskill with intent: Use AI for Creatives resources to speed low-value tasks while keeping your signature work human-led.
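The data-tracking step above can be started with a few lines of Python: hash each finished file and append a dated record of where it was posted to a local log. This is a minimal sketch, not a legal standard; the JSON-lines log format, field names, and URL argument are illustrative choices you would adapt to your own workflow.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def hash_artwork(path: str) -> str:
    """Compute a SHA-256 hash of a file, streamed in chunks so large files are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def log_posting(log_path: str, artwork_path: str, posted_url: str) -> dict:
    """Append a dated proof record (file name, hash, posting location) to a JSON-lines log."""
    record = {
        "file": Path(artwork_path).name,
        "sha256": hash_artwork(artwork_path),
        "posted_at": posted_url,  # where the work went up (illustrative field)
        "logged_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only log of hashes and timestamps won't stop scraping by itself, but it gives you dated evidence for takedown requests or contract disputes: you can show exactly which file existed, in what form, and when you published it.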

If you're reworking your skill stack and offers, a structured path helps. See curated options by role at Complete AI Training - Courses by Job.

Bottom line

This isn't a blip. The survey shows widespread, grounded concern, and a push for transparency, consent, and fair pay. Treat your IP like inventory, your contracts like code, and your audience like your safety net.

Hold your line on consent. Price your rights. Build systems that make your work harder to copy and easier to pay for.
