Understanding Color Metaphors: Insights from Humans and ChatGPT
A recent study explored how humans and ChatGPT interpret color metaphors, revealing notable differences between experience-based understanding and language-derived reasoning. Surprisingly, colorblind and color-seeing participants performed similarly on metaphor comprehension, suggesting that direct visual perception is not essential for grasping color-related language. However, painters, who have extensive hands-on experience with color, outperformed both groups on novel metaphors, indicating that practical interaction with color enriches conceptual understanding.
ChatGPT, trained exclusively on textual data, produced consistent and culturally informed responses but struggled with unfamiliar or inverted metaphors. This highlights the limits of AI models that rely solely on language patterns without sensory experience.
Key Findings
- Color Vision Not Required: Colorblind and color-seeing adults performed similarly on metaphor tasks.
- Experience Enhances Understanding: Painters excelled at interpreting novel color metaphors.
- AI’s Boundaries: ChatGPT generated consistent answers but faltered on novel or reversed metaphors.
How Humans and AI Process Color Metaphors
ChatGPT generates responses by analyzing vast textual datasets, identifying patterns, and predicting probable continuations. Common color metaphors like “feeling blue” or “seeing red” are deeply embedded in English, making them familiar to the model. Still, ChatGPT lacks firsthand sensory experience—it has never actually seen a blue sky or a red apple. This absence became apparent when it struggled with novel metaphors such as “the meeting made him burgundy” or tasks requiring inversion of color meanings.
The study raises key questions: Does sensory experience provide humans with an edge in interpreting metaphors beyond language alone? Or can language, by itself, support metaphor comprehension equally for humans and AI?
Study Overview
The research involved online surveys comparing four groups: color-seeing adults, colorblind adults, painters with regular color interaction, and ChatGPT. Participants assigned colors to abstract concepts and interpreted both familiar and unfamiliar color metaphors. They also explained their reasoning behind color choices.
Results showed that color perception did not significantly affect metaphor understanding—colorblind and color-seeing participants had similar responses. Painters, however, outperformed others with novel metaphors, implying that practical experience deepens conceptual links between color and language.
ChatGPT’s responses were consistent and culturally informed. For example, when explaining the expression “a very pink party,” it referenced associations of pink with happiness and kindness. Yet it drew on embodied reasoning less often than human participants did, and it had difficulty interpreting novel metaphors and inverting color meanings.
Implications for AI Development
This study emphasizes the challenges faced by language-only AI models in fully capturing human conceptual reasoning, especially regarding metaphors grounded in sensory experience. Integrating sensory data—such as visual or tactile inputs—could improve AI’s ability to process such nuanced concepts.
As one of the study’s lead researchers noted, there remains a clear distinction between mimicking semantic patterns and drawing upon embodied, hands-on experience in reasoning.
About the Research
The interdisciplinary team involved psychologists, neuroscientists, social scientists, computer scientists, and astrophysicists from several institutions, including UC San Diego, Stanford, Université de Montréal, the University of the West of England, and Google’s AI research division, DeepMind.
The study was supported by multiple fellowships and grants but was conducted independently of Google’s influence on design or publication decisions.
Reference to Original Research
The study, titled “Statistical or Embodied? Comparing Colorseeing, Colorblind, Painters, and Large Language Models in Their Processing of Color Metaphors”, was published in Cognitive Science. It investigates whether metaphorical reasoning that involves embodied experience, such as color perception, can be learned from language statistics alone.
The findings reveal that while colorblind individuals understand color metaphors similarly to color-seeing individuals—likely due to language exposure—AI models trained solely on text show notable limitations, especially when dealing with novel or inverted metaphors. Painters’ superior performance suggests that embodied experience plays a significant role in conceptual metaphor processing.
For professionals interested in the intersection of AI, cognition, and language, this study provides valuable evidence on the current capabilities and limitations of language-based AI models like ChatGPT.
Explore more about AI capabilities and training courses at Complete AI Training.