How journalists describe AI shapes what the public believes it can do
A study of over 20 billion words from news articles across 20 countries found that journalists rarely use human-like language when describing artificial intelligence. But when they do, that language can distort public understanding of what these systems actually are.
Researchers from Iowa State University analyzed expressions such as "thinks," "knows," and "understands" paired with terms like "AI" and "ChatGPT." The findings reveal a spectrum of language choices, from neutral technical descriptions to phrases that suggest machines possess intention or awareness.
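This kind of collocation counting can be sketched in a few lines. The following is an illustrative toy example, not the study's actual pipeline; the mini-corpus and the word lists are invented for demonstration.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for the news articles analyzed.
corpus = (
    "AI needs large amounts of data to work well. "
    "ChatGPT knows the answer, some headlines claim. "
    "AI needs careful oversight, experts say."
)

# AI terms paired with neutral and anthropomorphic verbs, as in the study.
subjects = ["AI", "ChatGPT"]
verbs = ["thinks", "knows", "understands", "needs", "wants"]

# Match any subject immediately followed by any verb from the lists.
pattern = re.compile(
    r"\b(" + "|".join(subjects) + r")\s+(" + "|".join(verbs) + r")\b"
)

counts = Counter(f"{s} {v}" for s, v in pattern.findall(corpus))
for phrase, n in counts.most_common():
    print(phrase, n)  # e.g. "AI needs 2", "ChatGPT knows 1"
```

At corpus scale the same idea runs over billions of words, which is how the researchers arrived at counts such as 661 occurrences of "needs" versus 32 of "ChatGPT knows."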
Most journalists choose careful language
The word "needs" appeared 661 times in the study, typically in neutral contexts: "AI needs large amounts of data." This phrasing describes a technical requirement without implying human traits.
By contrast, "ChatGPT knows" appeared only 32 times. Researchers stress that even infrequent use of such phrases can shape how readers perceive technology. A machine that "knows" something carries different implications than one that simply processes information.
The difference matters. Saying "AI needs data" is like describing a car that needs fuel. Saying "AI must understand the world" assigns the machine intention, curiosity, even hints of consciousness: qualities it does not possess.
Language choices carry real consequences
When readers absorb descriptions of AI as something that "understands" or "wants," expectations rise beyond what these systems can deliver. Public discourse then swings between extremes: apocalyptic fears of AI dominance and utopian visions of machines solving all human problems.
Anthropomorphic language also obscures who bears responsibility for AI systems. It shifts focus away from programmers, engineers, and the companies that built and deployed the technology.
For PR and communications professionals, this matters directly. The language your organization uses when discussing AI shapes how stakeholders, customers, and the public perceive your technology and your accountability for its outcomes.
A spectrum, not a binary
Anthropomorphism in AI coverage exists on a spectrum. Neutral technical descriptions sit at one end. Statements implying machines think sit at the other. Most journalism falls toward the careful end of that spectrum.
The researchers offer a practical test: does this sentence suggest the machine has intentions? If yes, it may need revision.
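That editorial test can be approximated mechanically as a rough first-pass filter. This is a sketch of one possible heuristic, not a tool from the study; the verb list and function name are illustrative assumptions.

```python
# Verbs that attribute intention or awareness to a machine (illustrative list).
INTENTION_VERBS = {"thinks", "knows", "wants", "understands", "decides", "believes"}


def suggests_intention(sentence: str) -> bool:
    """Flag sentences where an AI term is immediately followed by an
    intention-attributing verb. A crude heuristic, not a full parser."""
    words = sentence.lower().replace(",", "").replace(".", "").split()
    for i, word in enumerate(words[:-1]):
        if word in {"ai", "chatgpt"} and words[i + 1] in INTENTION_VERBS:
            return True
    return False


print(suggests_intention("AI needs large amounts of data"))  # False
print(suggests_intention("ChatGPT knows your preferences"))  # True
```

A flagged sentence is only a candidate for revision; a human editor still decides whether the phrasing genuinely anthropomorphizes or is harmless shorthand.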
The responsibility question
When language leads people to believe AI "understands" or "decides," it becomes easy to expect machines to trade stocks, win conflicts, or eliminate poverty. It becomes equally easy to fear that AI will dominate humanity.
In reality, these systems contain only what humans put into them: data, code, goals, and errors. They have no desires and no awareness.
Anthropomorphizing AI is not an innocent shortcut. It shifts responsibility from humans, where it belongs entirely, onto machines. The more an organization speaks about AI as if it were human, the harder it becomes to recognize where accountability actually lies.
For communications teams, the challenge is clear: describe what AI does with precision, acknowledge what built it and why, and resist the pressure to make machines sound more capable or conscious than they are. Your credibility depends on it.