Large study shows AI fuels creativity by encouraging exploration, not efficiency alone

AI that invites exploration beats one 'best' pick, a Swansea study finds. Showing offbeat and even flawed ideas made people spend longer and create better designs.

Published on: Dec 27, 2025

AI Works Best as a Creative Collaborator, Study Finds

New research from Swansea University points to a clear takeaway for anyone building or using AI in design: systems that invite exploration outperform those that push for efficiency alone.

In one of the largest online experiments on human-AI co-creation to date, more than 800 participants used an AI-assisted tool to design virtual cars. When the AI surfaced diverse suggestions, spanning effective, unusual, and even intentionally flawed concepts, people spent more time on the task, produced stronger designs, and reported higher engagement.

Exploration over optimization

Many design tools quietly optimize and hand over a single "best" option. This study used a different approach based on MAP-Elites, surfacing a gallery that covered a wide range of ideas rather than one target solution.

The diversity was intentional: effective, offbeat, and imperfect examples sat side by side. Seeing "bad" options in the mix helped participants break fixation, question early assumptions, and push into fresh territory.
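The contrast between the two approaches can be sketched in code. This is a minimal illustration, not the study's actual tool: `best_pick` mirrors the conventional optimizer that hands back one top option, while `diverse_gallery` keeps the best candidate per niche of a chosen feature (here, a hypothetical "novelty" measure), so effective, offbeat, and imperfect designs all stay visible.

```python
import random

def best_pick(candidates, score):
    """Conventional optimizer: return only the single top-scoring design."""
    return max(candidates, key=score)

def diverse_gallery(candidates, score, feature, bins=5):
    """Quality-diversity selection: keep the best candidate in each
    feature bin, so the gallery spans a range of ideas, not one optimum."""
    gallery = {}
    for c in candidates:
        niche = min(int(feature(c) * bins), bins - 1)
        if niche not in gallery or score(c) > score(gallery[niche]):
            gallery[niche] = c
    return list(gallery.values())

# Toy designs: (quality, novelty) pairs in [0, 1] -- illustrative only.
random.seed(0)
designs = [(random.random(), random.random()) for _ in range(100)]
score = lambda d: d[0]
novelty = lambda d: d[1]

print(best_pick(designs, score))                      # one design
print(len(diverse_gallery(designs, score, novelty)))  # several, one per niche
```

The point of the second function is exactly the study's: showing the spread, including weaker niches, is what invites users to keep exploring.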

What changed when AI suggested options

  • Participants invested more time and attention in the task.
  • Final designs improved in performance and variety.
  • People felt more involved and reported a stronger sense of agency.

From the research team: "People often think of AI as something that speeds up tasks or improves efficiency, but our findings suggest something far more interesting. When people were shown AI-generated design suggestions, they spent more time on the task, produced better designs, and felt more involved. It was about creativity and collaboration."

Time to rethink how we evaluate AI design tools

Click-throughs and copy rates miss the point. The study argues for broader evaluation that includes how AI affects curiosity, exploration, and user confidence, alongside output quality.

In short: measure what matters to creative work, not just what's easy to log.

Practical guidelines for researchers and product teams

  • Prefer galleries over single "best" picks. Show a spread (effective, unconventional, and flawed) to widen the search space.
  • Label dimensions. Make it clear how options differ (e.g., stability, speed, aesthetics) so users can steer exploration.
  • Build in contrast. Put opposites side by side to disrupt fixation and trigger new lines of thought.
  • Solicit reflection. Prompt users to explain why they chose an option; this strengthens intent and learning.
  • Track exploration health. Watch for revisiting, branching, and comparing, rather than rewarding quick acceptance.

What to measure beyond clicks

  • Exploration breadth: number of unique directions considered, not just variants on one idea.
  • Time-on-task with intent: time paired with active comparisons and edits, not idle dwell.
  • Affective state: interest, confidence, and perceived ownership of the outcome.
  • Quality under diversity: performance of final designs across different constraints, not just a single score.
  • Fixation checks: early lock-in vs. later-stage pivots and revisits.
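Several of these metrics can be derived from an ordinary interaction log. The sketch below assumes a hypothetical event schema of `(action, design_family)` tuples; the field names and actions are illustrative, not from the study.

```python
def exploration_metrics(events):
    """Summarize exploration health from an interaction log.

    events: list of (action, design_family) tuples -- hypothetical schema
    where design_family names the broad direction a user is working in.
    """
    families = [fam for action, fam in events if action in ("view", "edit")]
    # A revisit is any return to a family seen earlier in the session.
    revisits = sum(1 for i, f in enumerate(families) if f in families[:i])
    return {
        "breadth": len(set(families)),                     # unique directions
        "revisits": revisits,                              # later-stage returns
        "comparisons": sum(1 for a, _ in events if a == "compare"),
    }

log = [("view", "boxy"), ("edit", "boxy"), ("view", "streamlined"),
       ("compare", None), ("view", "boxy"), ("view", "off-road")]
print(exploration_metrics(log))  # {'breadth': 3, 'revisits': 2, 'comparisons': 1}
```

A session with high breadth and non-zero revisits signals healthy exploration; a log that is all quick accepts of the first suggestion signals fixation.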

Where this helps

  • Engineering and architecture: concept generation before narrowing to constraints.
  • Game and product design: style exploration and mechanic variants.
  • Music and media: structured variation to trigger new patterns and themes.

Study details

The experiment used the Genetic Car Designer Game to collect large-scale interaction data. The AI surfaced design galleries via MAP-Elites, a quality-diversity approach that maps high-performing solutions across behavior niches rather than chasing a single optimum.
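For readers unfamiliar with the technique, the core MAP-Elites loop is compact. This is a generic textbook-style sketch of the algorithm, not the study's implementation: an archive maps each behavior niche to its best-so-far solution ("elite"), and each iteration mutates a random elite and lets the child compete only within its own niche.

```python
import random

def map_elites(init, mutate, score, feature, bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites: keep the best solution per feature niche
    instead of converging on a single global optimum."""
    rng = random.Random(seed)
    archive = {}  # niche index -> (score, solution)
    for _ in range(iters):
        parent = rng.choice(list(archive.values()))[1] if archive else init(rng)
        child = mutate(parent, rng)
        niche = min(int(feature(child) * bins), bins - 1)
        if niche not in archive or score(child) > archive[niche][0]:
            archive[niche] = (score(child), child)
    return archive

# Toy problem: solutions are floats in [0, 1];
# quality = closeness to 0.5, behavior feature = the value itself.
init = lambda rng: rng.random()
mutate = lambda x, rng: min(1.0, max(0.0, x + rng.gauss(0, 0.1)))
score = lambda x: -abs(x - 0.5)
feature = lambda x: x

archive = map_elites(init, mutate, score, feature)
print(len(archive))  # multiple niches filled, not one "best" answer
```

The returned archive is exactly the kind of gallery the study showed participants: high performers in some niches, weaker but distinctive solutions in others.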

Full citation: "From Metrics to Meaning: Time to Rethink Evaluation in Human-AI Collaborative Design," ACM Transactions on Interactive Intelligent Systems, March 7, 2024. DOI: 10.1145/3773292.

For teams building human-AI workflows

If you're formalizing training or processes for collaborative AI in research settings, see curated AI learning paths by job role: AI Learning Path for UX/UI Designers.
