How creative is AI? Researchers say image models loop into 12 clichés
Credit: Hintze, Arend et al. / Patterns, Volume 0, Issue 0, 101451
Two teams, one goal: see what image models do without humans. The result wasn't wild originality. It was 12 recurring motifs.
Researchers from Dalarna University and Michigan State University ran autonomous loops: one model generated an image, a second model described it, that description fed back to the image model, and the cycle repeated for 100 rounds. They ran this 40 times across four image generators. Instead of exploring, the outputs collapsed into familiar scenes, which the authors called "visual elevator music."
How the test worked
The setup removed human prompts entirely. Image → caption → new image, again and again. Think creative telephone, but with models talking to models.
Across thousands of images, the loop kept landing in the same places. The patterns were consistent across different generators, which hints at a shared bias in training data and sampling behavior.
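Here is a minimal sketch of that loop in Python, using Stable Diffusion as the generator and BLIP as the captioner. Both model choices and the seed prompt are illustrative assumptions, not the paper's setup; only the 100-round, no-human structure follows the article's description.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

# Illustrative model choices; the study ran four different generators,
# none of which are claimed here. Assumes a CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to("cuda")

def caption(image) -> str:
    """Model 2: describe the image in plain text."""
    inputs = processor(images=image, return_tensors="pt").to("cuda")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# The closed loop: image -> caption -> new image, no human in between.
prompt, history = "a picture", []          # seed prompt is arbitrary
for _ in range(100):
    image = pipe(prompt).images[0]         # model 1: text -> image
    prompt = caption(image)                # model 2: image -> text
    history.append((prompt, image))        # watch what the loop converges on
```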
What kept showing up
- Stormy lighthouses
- Urban night scenes
- Gothic cathedrals
- Palatial interiors
- Bridges
- A lonely tree
- Action-photo aesthetics
The authors describe this as a tendency to favor high-probability outputs over true novelty. Left alone, the systems play it safe.
Why creatives should care
If default loops become default culture, visual work starts to look the same. That's not an AI problem; it's a taste problem.
The researchers warned that widespread use could homogenize visual culture. Translation for your practice: if you let models run on autopilot, they'll pull you toward stock aesthetics.
Practical ways to break the loop
- Ban the usual suspects: use negative prompts to exclude lighthouses, bridges, lonely trees, cathedrals, and "moody urban nights." Force the model away from clichés.
- Add controlled chaos: vary seeds, increase sampling steps, tweak guidance strength, and introduce temperature where applicable. Favor exploration early, refinement later. (The first sketch after this list shows one way to wire both of these into a generation pass.)
- Stage your prompts: concept first, composition second, palette and material third. Make each stage a constraint with a novelty goal.
- Source outside the internet: scan your own textures, sketch thumbnails, photograph found objects, and use them as references or control images.
- Cross-domain mashups: combine distant fields (botanical microscopy + brutalist signage, folk embroidery + aerospace UI). Make friction the point.
- Critic in the loop: add a second model that scores novelty or similarity; discard anything too close to earlier outputs before iterating. (The second sketch after this list shows one version, using CLIP embeddings.)
- Write anti-convergence rules: "no centered subjects," "no horizon lines," "no symmetrical framing," "avoid blue-dominant palettes," etc.
- Batch and cull: generate wide, prune hard. Keep edge cases, scrap look-alikes, then iterate only on the survivors.
- Fine-tune your bias: train a LoRA on your studio's board to redirect the model away from stock-photo tropes.
- Insert human checkpoints: after every few rounds, rewrite the brief with a fresh constraint pulled from your references.
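A sketch of the first two tips with the diffusers library. The model ID, the banned-term list, and the parameter values are assumptions for illustration, not recommendations from the study:

```python
import torch
from diffusers import StableDiffusionPipeline

# Model choice is an assumption for illustration; swap in your own.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

BANNED = "lighthouse, bridge, lonely tree, cathedral, moody urban night, stock photo"

def explore(prompt: str, seed: int, *, steps: int = 20, guidance: float = 5.0):
    """Early-stage exploration: low guidance, varied seeds, clichés excluded."""
    gen = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(
        prompt,
        negative_prompt=BANNED,        # ban the usual suspects
        num_inference_steps=steps,     # raise later for refinement passes
        guidance_scale=guidance,       # lower = looser, more exploratory
        generator=gen,                 # vary seeds for controlled chaos
    ).images[0]

# Wide, cheap exploration pass across many seeds.
candidates = [explore("folk embroidery meets aerospace UI", seed=s) for s in range(8)]
```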
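And a sketch of the critic-in-the-loop and batch-and-cull steps, scoring novelty with CLIP image embeddings. CLIP and the 0.9 similarity cutoff are my substitutions here, not something the study prescribes:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(image) -> torch.Tensor:
    """L2-normalized CLIP embedding for one PIL image."""
    inputs = proc(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def cull(images, max_similarity: float = 0.9):
    """Keep an image only if it is not too close to anything already kept."""
    kept, kept_embs = [], []
    for img in images:
        e = embed(img)
        if all(float(e @ k.T) < max_similarity for k in kept_embs):
            kept.append(img)
            kept_embs.append(e)
    return kept

# candidates: the list of images from the exploration pass above.
survivors = cull(candidates)   # iterate only on the survivors
```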
Creative takeaway: Constraints create taste. Defaults breed sameness. Your job is to set the constraints that make sameness impossible.
What this says about us
The authors point out that humans repeat themes too (flood myths, spirals, archetypes), shaped by how we think and live. AI's attractors are different: stock photography aesthetics shaped by internet-scale data.
So the question isn't whether convergence exists. It's whether you accept the model's defaults or feed it a better diet.
Want the source?
The research appears in Patterns (Cell Press). For background on one of the research groups, see the BEACON Center at Michigan State University.
Build anti-convergence into your workflow
If you're training a team to push past safe outputs, start with structured prompting and critique loops.