Heavy AI use makes writing more neutral and less personal, study finds

Heavy AI use makes writing blander and shifts its meaning, a peer-reviewed study found. Essays from heavy AI users were neutral 69% more often and contained 50% fewer pronouns than those written with little or no AI help.

Published on: Mar 21, 2026

Researchers from Google and West Coast universities found that people who heavily rely on large language models produce essays that diverge significantly in substance from those written by people who use AI minimally or not at all.

The study, which has been peer-reviewed and accepted to an upcoming workshop at a leading AI conference, examined how 100 participants answered the question "Does money lead to happiness?" Some used AI systems extensively. Others used them lightly or not at all.

Heavy AI users submitted essays that answered with a neutral response 69% more often than participants who avoided AI or used it only for minor edits. Participants who used less AI wrote essays that were more passionate, either positively or negatively, about the relationship between money and happiness.

The shift toward impersonal language

Beyond altered meaning, heavy AI reliance changed the style of the writing itself. Essays from heavy AI users contained 50% fewer pronouns and included fewer anecdotes and references to human experiences. The language became more formal and less personal overall.

Natasha Jaques, a computer science professor at the University of Washington and senior research scientist at Google DeepMind, led the research. She said the findings reveal a fundamental problem with how current AI systems work.

"The LLMs are pushing the essays away from anything that a human would have ever written," Jaques said. "They just change human writing in a way that's very large and very unlike what humans would have done otherwise."

After completing the experiment, participants who heavily relied on AI reported that their essays were significantly less creative and less in their own voice. Yet they reported similar satisfaction with their final outputs compared to participants who used AI less, a disconnect that concerns researchers about long-term effects.

How AI edits differ from human edits

The research team also compared how AI systems edit existing writing versus how humans do it. They used essays published in 2021, before widespread LLM adoption, and asked three leading AI systems (Claude 3.5 Haiku, GPT-5 Mini, and Gemini 2.5 Flash) to revise them based on human feedback from the original dataset.

The AI systems made much larger edits than human editors faced with the same task. While human editors typically substituted individual words and preserved most original vocabulary, the AI systems replaced a much larger fraction of the text.

"This substitution of words contributes to the loss of individual voice, style, and meaning, as the unique lexical fingerprint of each writer is overwritten by the given model's preferred vocabulary," the authors wrote.

Thomas Juzek, a computational linguistics professor at Florida State University who was not involved in the research, called the paper a valuable contribution to a growing area of study. He flagged a particular concern: people often think they're simply using AI for grammar checking when the system is actually doing much more.

Why this happens

Jaques suggested that AI systems' language-altering behavior may stem from how they are trained. Models rewarded for satisfying human feedback draw no line between satisfying users' preferences and actively changing them.

She compared it to how YouTube recommendations can shift what people want to watch. As more researchers rely on AI to write, she warned, these effects could alter conclusions in ways that are already reaching existing institutions.

"Humans care about clarity, relevance, and impact, while AI cares about scalability and reproducibility," Jaques said. "It's changing our conclusions in ways that are already affecting our existing institutions."

Jaques said she avoided using AI to write the new paper itself. Instead, she uses LLMs only as a starting point: she writes rough drafts in a conversational style, feeds them to an AI system, then uses the output as motivation to write the piece herself.

The study adds to a growing body of research on how generative AI and LLMs reshape human communication. For professionals, especially in research fields, the findings suggest that careful consideration of when and how to use these tools remains essential.

