AI doesn't think; we just talk like it does

Writers often give AI a mind it doesn't have. A new study shows mental verbs can mislead, so assign agency to people and describe systems by what they generate or do.

Published on: Jan 20, 2026

Stop Giving AI a Mind It Doesn't Have

Writers lean on mental verbs because they're fast, familiar, and human. Think, know, understand, remember. Useful for people. Risky for machines.

A new study looked at how these verbs show up in news coverage about AI and what that does to readers' perception. The takeaway is straightforward: a single word can nudge readers to see agency, intent, or feelings where none exist.

Why mental verbs mislead

Mental verbs suggest an inner life. Beliefs, desires, intentions. AI systems don't have those. They generate outputs from patterns in data.

"We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines - it helps us relate to them," said Jo Mackiewicz of Iowa State. "But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."

That blur does more than confuse. It can overstate competence ("AI decided…") and misplace responsibility. The real decision-makers are the people who design, train, deploy, and supervise these systems.

What the data shows

The research team analyzed the News on the Web (NOW) corpus, a 20+ billion word dataset of English-language news from 20 countries, to see how often mental verbs pair with AI and ChatGPT. They focused on verbs like learns, means, and knows.

  • Pairings were relatively rare in news writing. "Needs" appeared most with AI (661 instances). For ChatGPT, "knows" was most frequent (32 instances).
  • Some uses weren't anthropomorphic at all. "AI needs large amounts of data" treats AI like any system with requirements - much like "the car needs gas."
  • Anthropomorphism exists on a spectrum. "AI needs to be trained" frames obligation. "AI needs to understand the real world" leans into human-like qualities.
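
To make that kind of analysis concrete, here is a minimal sketch of a subject-verb pairing count in Python. The verb list, the adjacency rule (the verb immediately follows the subject), and the file name news_sample.txt are illustrative assumptions, not the study's actual pipeline; the NOW corpus itself is licensed and far larger than a single text file.

    import re
    from collections import Counter

    # Illustrative sketch, not the study's pipeline: tally cases where
    # "AI" or "ChatGPT" is immediately followed by a mental verb.
    SUBJECTS = ["AI", "ChatGPT"]
    MENTAL_VERBS = ["needs", "knows", "learns", "means", "thinks", "understands"]

    pattern = re.compile(
        r"\b(" + "|".join(SUBJECTS) + r")\s+(" + "|".join(MENTAL_VERBS) + r")\b"
    )

    def count_pairings(text: str) -> Counter:
        """Count (subject, verb) pairs such as ('AI', 'needs')."""
        return Counter(m.groups() for m in pattern.finditer(text))

    # Hypothetical input file standing in for a news corpus.
    with open("news_sample.txt", encoding="utf-8") as f:
        counts = count_pairings(f.read())

    for (subject, verb), n in counts.most_common():
        print(f"{subject} {verb}: {n}")

Raw counts like these only locate candidates; whether a given use is anthropomorphic depends on the sentence around it, which is the point of the next section.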

"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," said Jeanine Aune of Iowa State.

Context beats counts

Frequency isn't the whole story. The same verb can be neutral in one sentence and suggestive in another. Passive voice often pulled agency back to humans ("AI needs to be implemented"), which is closer to the truth of how systems are used.

That nuance matters for writers: readers make meaning from context, not isolated words. A single phrasing choice can tilt how capable, safe, or autonomous a tool appears.

Practical language guide for writers

  • Assign agency to people. Replace "AI decided to flag users" with "The system flagged users based on rules set by the team."
  • Use observable outputs, not inner states. Swap "ChatGPT knows the topic" for "ChatGPT generated a response consistent with the topic."
  • Prefer neutral verbs. Generated, returned, produced, suggested, classified, ranked, retrieved, matched.
  • Be precise with "needs." Requirement: "The model needs high-quality training data." Obligation: "The model needs to be audited" (by whom? name them).
  • Treat goals as human goals. Instead of "AI wants to reduce bias," write "The team aims to reduce bias using AI-assisted methods."
  • Flag limits. "The system may produce errors on out-of-domain inputs" is clearer than "It struggles to understand."
  • Avoid implying consciousness. Steer clear of verbs like think, know, believe, want, unless you're clearly using metaphor and it won't mislead.
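
Guidelines like these are easy to forget on deadline, so a rough lint pass can help. The sketch below flags mental verbs that follow an AI subject so a human can re-read the sentence; the subject list, verb list, and suggested swaps are assumptions for illustration, not a vetted style rule set.

    import re

    # Rough pre-publish lint: flag anthropomorphic subject-verb pairings
    # so a human can decide whether the phrasing misleads.
    MENTAL_VERBS = {
        "thinks": "generated / produced",
        "knows": "generated a response consistent with",
        "understands": "matched / retrieved",
        "wants": "was configured to",
        "decided": "flagged or classified, per rules set by the team",
    }
    SUBJECT = r"\b(AI|ChatGPT|the model|the system)\s+"

    def lint(draft: str) -> list[str]:
        """Return one warning per flagged subject-verb pairing."""
        warnings = []
        for verb, alternative in MENTAL_VERBS.items():
            for m in re.finditer(SUBJECT + verb + r"\b", draft, re.IGNORECASE):
                warnings.append(f"flagged: '{m.group(0)}' (try: {alternative})")
        return warnings

    if __name__ == "__main__":
        sample = "ChatGPT knows the topic, and the system decided to flag users."
        for warning in lint(sample):
            print(warning)

A flag is an invitation to re-read, not an automatic rewrite: as noted above, the same verb can be neutral in one sentence and suggestive in another.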

Quick pre-publish checklist

  • Did I credit humans for design, policy, oversight, and accountability?
  • Did I label outputs accurately (generated, predicted) rather than mental states?
  • If I used a mental verb, would a non-expert read it as literal? If yes, rewrite.
  • Did I add concrete context (data, rules, training, constraints) that explains the behavior?

Why this matters for your byline

Clear language sets expectations. It curbs hype, reduces misplaced blame, and builds reader trust. As AI tools become common in reporting, marketing, and technical documentation, precision is a craft advantage.

The study concludes that usage is less frequent - and more nuanced - than many assume. Still, even infrequent phrasing can sway how readers see AI. Our job is to keep the mental life with humans, and describe machine behavior as what it is: pattern-driven outputs shaped by data and design choices.
