News writers rarely use human-like language to describe AI, study finds

News writers rarely describe AI as thinking or knowing, an Iowa State study of 20 billion words found. Mental verbs like "understands" appeared far less often than assumed.

Categorized in: AI News, Science and Research
Published on: Apr 19, 2026

Researchers at Iowa State University analyzed how journalists describe artificial intelligence and found that mental verbs - words like "knows," "thinks," and "understands" - appear far less often in news writing than in everyday conversation.

The study examined more than 20 billion words from English-language news articles across 20 countries, searching for instances where the terms "AI" and "ChatGPT" were paired with human-like verbs. The findings challenge the assumption that news coverage routinely attributes human qualities to machines.
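The paper's analysis code isn't reproduced here, but the core method (counting how often a target term co-occurs with a mental verb) is straightforward to sketch. The following is a minimal, hypothetical Python illustration; the tiny sample corpus, verb list, and window size are stand-ins for the study's actual materials, not a reconstruction of them.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the study's 20+ billion
# words of news text across 20 countries.
corpus = [
    "AI needs large amounts of data to produce useful output.",
    "ChatGPT knows the answer, the demo suggested.",
    "The company says its AI understands customer intent.",
]

TARGET_TERMS = ("AI", "ChatGPT")
# Illustrative verb list only; the study's actual set of mental
# verbs is not reproduced here.
MENTAL_VERBS = {"knows", "thinks", "understands", "decides", "needs", "wants"}

def count_collocations(sentences, targets, verbs, window=3):
    """Count (target, verb) pairs where the verb occurs within
    `window` tokens after the target term."""
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[A-Za-z']+", sentence)
        for i, token in enumerate(tokens):
            if token in targets:
                for follower in tokens[i + 1 : i + 1 + window]:
                    if follower.lower() in verbs:
                        counts[(token, follower.lower())] += 1
    return counts

for (term, verb), n in count_collocations(corpus, TARGET_TERMS, MENTAL_VERBS).items():
    print(f"{term} + {verb}: {n}")
```

A corpus analysis at the study's scale would stream documents from disk and likely rely on a proper tokenizer and part-of-speech tagger rather than a regex, but the counting logic stays the same.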

The Risk of Anthropomorphic Language

When writers describe AI as if it "knows" something or "decides" to act, they risk creating false impressions about machine capabilities. AI systems generate responses by matching statistical patterns in their training data, not by forming thoughts or making conscious decisions.

This language choice matters because it can obscure who actually bears responsibility for AI systems. Developers, engineers, and organizations build and deploy these technologies - not the systems themselves.

"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," said Jeanine Aune, a teaching professor of English at Iowa State.

What the Data Actually Shows

The researchers found that "needs" appeared most frequently with AI references, showing up 661 times in the corpus. For ChatGPT specifically, "knows" was the most common pairing, but it appeared only 32 times.

Editorial standards likely explain the restraint. Associated Press guidelines discourage attributing human emotions or traits to AI, and journalists appear to follow this convention.

Context matters more than the words themselves. When writers say "AI needs large amounts of data," they're describing a basic requirement, much as a car needs fuel or a recipe needs ingredients. The phrase doesn't imply the system has desires or consciousness.

Anthropomorphism Exists on a Spectrum

Not all uses of mental verbs are equivalent. Some phrases come closer to suggesting human-like qualities than others.

A statement like "AI needs to understand the real world" implies expectations tied to human reasoning or awareness. These uses go beyond simple descriptions and begin to suggest deeper capabilities than the system actually possesses.

"Anthropomorphizing isn't all-or-nothing and instead exists on a spectrum," Aune said.

Why This Matters for Your Work

The research shows that word choice shapes how readers understand AI systems and their actual capabilities. For professionals writing about AI - whether in technical documentation, news, or research - precision in language directly affects how audiences interpret the technology.

The findings suggest that even infrequent uses of anthropomorphic language warrant attention. Writers benefit from understanding not just what words they use, but how those words might influence perception.

The study was published in Technical Communication Quarterly and included researchers from Iowa State University, Brigham Young University, and the University of Northern Colorado.

For those looking to deepen their understanding of how AI systems actually work - and how to communicate about them accurately - Generative AI and LLM Courses provide grounding in the mechanics behind these systems. Prompt Engineering Courses also offer practical training in precise language and communication with AI systems.

