"AI slop" and the trust problem: Blue Archive producer's blunt take for creatives
Generative AI is flooding creative work with mixed results, and audiences are noticing. In a New Year's interview, Blue Archive producer Yongha Kim called out the core issue: reckless use creates "AI slop," weakens authenticity, and drives consumer distrust. The takeaway is simple: if quality drops or intent disappears, backlash follows.
Kim's stance isn't anti-AI. It's pro-craft. He points out that today's transformer and diffusion tools are simulators, not minds with taste or intent. If teams hit "generate" and ship, they trade trust for speed, and fans can feel it.
What "AI slop" looks like
Kim uses a snack analogy: beautiful packaging with a half-empty bag. That's how audiences experience low-effort AI: flashy at first glance, hollow on delivery. The result is suspicion, even toward good work, because the baseline expectation shifts downward.
He also notes a higher standard in subculture-heavy genres. Those audiences care deeply about voice, lore, and continuity. They can tell when work lacks the fingerprints of a real creative mind.
Will AI replace human-made work soon?
Short answer from Kim: not right now. He says today's tools still can't consistently hit production-level requirements, though their utility is growing. In the long run, replacement remains an open question, and only becomes plausible if quality, intent, and legal clarity catch up.
How his team uses AI (without losing the plot)
Nexon Games' IO division applies AI where it reduces busywork-voice recognition, speech synthesis, and practical assistive tasks. Instead of reorienting everything around AI, they identify developer needs first, then build tools that match those needs. The goal: free humans to spend more time on creative decisions, not to outsource the creative core.
For creatives: a practical playbook to avoid "AI slop"
- Define the human intent first. Write a one-paragraph creative intent for each asset or scene: mood, theme, constraints, non-negotiables. All AI use must ladder up to that.
- Use AI to remove friction, not authorship. Offload transcription, cleanup passes, alt variations, placeholder VO, timing, or batch formatting. Keep idea generation, style decisions, and final passes human-led.
- Set a quality bar. Create a checklist for narrative coherence, stylistic consistency, originality, and "feels made by us." If a piece doesn't clear the bar, it doesn't ship.
- Keep the voice consistent. Build a reference bank: tone guides, visual bibles, melody/style sheets, past canonical work. Make every AI prompt reference these assets.
- Provenance and permissions. Track data sources, training disclosures, model licenses, and consent for any likeness/voice. If you can't explain where it came from, don't use it.
- Human review with veto power. Final approval belongs to a human owner who can reject outputs that feel hollow, even if they "look fine." Taste is the product.
- Ship with receipts. Where appropriate, share process notes or BTS snippets to show the human role. Transparency rebuilds trust.
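The quality-bar and veto steps above can be sketched as a tiny review gate. This is a hypothetical illustration, not the studio's actual process: the criteria names and the `Review` structure are assumptions drawn from the checklist in this article.

```python
# Minimal sketch of a human-owned quality gate (illustrative names only).
from dataclasses import dataclass, field

# Criteria from the playbook: a piece ships only if it clears all of them.
CRITERIA = [
    "narrative_coherence",
    "stylistic_consistency",
    "originality",
    "feels_made_by_us",
]

@dataclass
class Review:
    asset_id: str
    # A human reviewer marks each criterion True/False after a full pass.
    checks: dict = field(default_factory=dict)
    notes: str = ""

    def passes(self) -> bool:
        # Every criterion must clear the bar; one miss blocks shipping.
        return all(self.checks.get(c, False) for c in CRITERIA)

review = Review(asset_id="scene_07_bg", checks={c: True for c in CRITERIA})
assert review.passes()

review.checks["feels_made_by_us"] = False  # the taste veto in action
assert not review.passes()
```

The point of the sketch is the `all(...)` rule: a piece that "looks fine" on three criteria but fails the taste check still doesn't ship.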
Where AI helps without gutting authenticity
- Audio: scratch VO, line clean-up, timing adjustments, basic sound pass.
- Art: background variations, palette studies, rough comps for exploration, texture tiling.
- Writing: outline expansion, alt taglines, grammar passes, beat mapping.
- Game dev: naming variations, boilerplate code, test asset generation, log parsing.
- Operations: speech-to-text for meetings, asset labeling, pipeline automation.
Legal and ethical guardrails
- Copyright and likeness. Get documented consent for voices, faces, or distinctive styles. Avoid unlicensed style mimicry.
- Attribution and credits. Credit human creators and clarify AI-assisted roles where relevant.
- Bias and safety. Test outputs against a red-flag list (sensitive content, stereotypes, private data leaks). Lock down prompts and logs accordingly.
- Data hygiene. Prefer models with clear licensing. Keep a log of model versions, settings, and sources for audits.
A note on the tech: simulators, not souls
Kim calls current systems "simulators." That maps to how transformers and diffusion models work: they predict or iteratively refine patterns from training data rather than create with intent. Useful, powerful tools-still missing taste, context, and responsibility.
Your authenticity checklist
- Concept integrity: Does it reflect the original idea and emotional goal?
- Voice fit: Would your audience instantly recognize this as your style?
- Originality: Does it bring a new angle, not just a remix?
- Continuity: Does it align with existing canon, lore, or brand rules?
- Human fingerprints: What decisions clearly required taste and judgment?
Practical team policy (simple and enforceable)
- Declare intent: Every task includes a human-written brief and reference set.
- Choose where AI is allowed: List approved use cases and banned ones.
- Track provenance: Maintain a lightweight log: model, settings, prompts, sources.
- QA gate: Human-owned review with clear pass/fail criteria tied to audience expectations.
- Iterate fast: Weekly review of what helped, what hurt, and what to stop doing.
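The "track provenance" item can be as small as an append-only JSON Lines file. The sketch below is one way to do it; the field names and file path are assumptions, not a required schema.

```python
# Hypothetical provenance log: one JSON line per AI-assisted task, capturing
# model, settings, prompt, and sources so the work is auditable later.
import json
from datetime import datetime, timezone

def log_provenance(path, *, task, model, settings, prompt, sources):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "model": model,        # model name and version
        "settings": settings,  # seed, temperature, etc.
        "prompt": prompt,
        "sources": sources,    # reference assets, licenses, consent docs
    }
    # Append-only: never rewrite history, just add lines.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

entry = log_provenance(
    "provenance.jsonl",
    task="placeholder VO, scene 07",
    model="tts-model-v2",  # illustrative model name
    settings={"seed": 42},
    prompt="Calm, measured read of line 12",
    sources=["voice_actor_consent_2024.pdf"],
)
```

Because each line is self-contained JSON, the log stays greppable and easy to audit without any tooling beyond a text editor.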
The consumer POV you can't ignore
Audiences will forgive tool choices; they won't forgive feeling cheated. If the "bag of snacks" is mostly air, the brand loses trust, and trust is hard to win back. Kim's message lands: AI has a place, but not as a shortcut around taste, intent, and craft.
Bottom line
Use AI to cut non-creative labor, not the creative heart. Make process and provenance clear. Hold the line on quality and voice. That's how you keep the bag full, and your audience with you.