How AI Wraps Disinformation in Language We Trust

AI can make fabricated stories feel clearer and more believable, and readers often prefer them. Writers need stricter sourcing, careful tone, and verification at every step.

Published on: Jan 18, 2026

AI Makes Fake News Feel More Credible: What Writers Need to Know

Artificial intelligence is making fabricated stories feel cleaner, clearer, and, unfortunately, more believable. Linguist Silje Susanne Alvestad and her colleagues are finding that AI-generated disinformation is often rated as more credible and more informative than similar texts written by humans.

That's a problem for anyone who writes for a living. If you use AI in your workflow, or report on sources shaped by it, you need sharper tools and better habits.

What the language of fabrication looks like

Research drawing on cases such as that of former New York Times journalist Jayson Blair shows clear linguistic tells. When fabricating, Blair tended to write in the present tense; when reporting facts, he used the past. His fabricated pieces leaned conversational, used shorter words, and relied on emphatic terms like "truly," "really," and "most."

Motivation matters, too. When money is the driver, writers use fewer metaphors. When ideology is the driver, metaphors multiply, especially ones drawn from sport and war.

The certainty trap

Fake news often sounds more categorical. You'll see heavy epistemic certainty, with markers like "obviously," "evidently," and "as a matter of fact." This pattern shows up across languages but isn't uniform: there's no universal fingerprint, and context and culture shift the signals.
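
If you want a quick way to act on this during an edit, a script can surface these markers for a human to review. Below is a minimal sketch in Python; the word lists are taken from the examples above and are an assumption about what's worth flagging, not a validated deception detector.

```python
import re

# Word lists drawn from the markers discussed above (an illustrative assumption,
# not a validated detector): certainty markers and emphatic terms.
CERTAINTY_MARKERS = ["obviously", "evidently", "as a matter of fact"]
EMPHATICS = ["truly", "really", "most"]

def flag_markers(text: str) -> dict[str, int]:
    """Count certainty markers and emphatics in a draft so an editor can review them."""
    lowered = text.lower()
    counts = {}
    for phrase in CERTAINTY_MARKERS + EMPHATICS:
        # Word-boundary match so "most" does not hit "almost" or "mostly".
        hits = len(re.findall(rf"\b{re.escape(phrase)}\b", lowered))
        if hits:
            counts[phrase] = hits
    return counts

if __name__ == "__main__":
    draft = ("Obviously the plan will work. As a matter of fact, "
             "it is truly the most ambitious proposal yet.")
    for phrase, hits in flag_markers(draft).items():
        print(f"{phrase!r}: {hits}")
```

Hits aren't proof of fabrication; they're a prompt to check whether the certainty is earned.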

AI changes the game

Alvestad's NxtGenFake project focuses on AI-generated disinformation: content that mixes true and false, tightens context, and slides past basic verification. Early findings show less variety in persuasive techniques compared with human-written propaganda.

Two patterns stand out:

  • Appeal to Authority: AI defaults to generic sourcing such as "according to researchers" and "experts believe." These claims feel credible but are hard to verify because they lack names, dates, or links. This is the classic appeal-to-authority fallacy.
  • Appeal to Values endings: AI-written propaganda often closes with value-driven lines about fairness, trust, or growth. It sounds reasonable and forward-looking, which lowers your guard.

Readers prefer AI-written disinfo

In testing with U.S. readers, AI-generated disinformation beat human-written versions on perceived credibility and informativeness. Surprisingly, it didn't score higher on emotional appeal. It just felt clearer and more useful, and people chose to keep reading it.

Practical checklist for writers

  • Demand specific sources: Replace "experts" and "researchers" with names, institutions, dates, and links. If you can't name it, don't claim it.
  • Calibrate certainty: Reserve "obviously," "evidently," and similar markers for cases with hard evidence. Show what you know, what you don't, and how you know it.
  • Watch tense and tone: Present tense for past events, heavy emphatics, and a breezy conversational style can signal fabrication, especially in hard news.
  • Interrogate metaphors: Sport and war metaphors can smuggle ideology. Ask why they're there and whether they distort nuance.
  • Audit endings: Value-laden sign-offs that urge action without new evidence are a red flag. Strengthen or cut them.
  • Force verifiability: Add names, links, and quotes that can be checked. If using AI, instruct it to surface sources and then verify them yourself; a minimal claim ledger for tracking this is sketched after this list.
  • Balance stance: Mix confident statements with conditional language where appropriate. Readers trust transparency more than bluster.
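
The "Demand specific sources" and "Force verifiability" items lend themselves to a simple structure: a claim ledger that records who said what, when, and where, and tracks whether it has been checked. Here's a minimal sketch; the field names and the example entry are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedClaim:
    """One row in a claim ledger: the statement and how it can be checked."""
    claim: str
    who: str           # a named person or institution, never just "experts"
    when: str          # date of the statement, study, or publication
    where: str         # outlet, report, or interview where it appeared
    link: str = ""     # URL or archive reference, if one exists
    verified: bool = False

@dataclass
class ClaimLedger:
    """Collects sourced claims for a draft and lists what still needs checking."""
    entries: list[SourcedClaim] = field(default_factory=list)

    def add(self, entry: SourcedClaim) -> None:
        self.entries.append(entry)

    def unverified(self) -> list[SourcedClaim]:
        return [e for e in self.entries if not e.verified]

if __name__ == "__main__":
    ledger = ClaimLedger()
    # Example entry based on the finding described in this article; dates and
    # links still need to be confirmed before publication.
    ledger.add(SourcedClaim(
        claim="AI-generated disinformation was rated more credible than human-written versions.",
        who="Silje Susanne Alvestad and colleagues, NxtGenFake project",
        when="(study date to confirm)",
        where="Testing with U.S. readers",
    ))
    for entry in ledger.unverified():
        print("Still to verify:", entry.claim)
```

Anything still in the unverified list when the draft is done either gets checked or gets cut.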

Editorial guardrails that help

  • Source box: For every substantive claim, keep a mini ledger: who, where, when, and a link.
  • Ban empty authorities: Flag and rewrite any "experts say" phrasing unless it's immediately followed by names and citations; a rough automated flag for this is sketched after this list.
  • Tense and hedging pass: Add a quick edit pass for tense consistency and overuse of emphatics or certainty markers.
  • Verification ladder: Cross-check at least two independent, named sources for any claim that could mislead.
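
To make the "ban empty authorities" rule enforceable, a pre-publication pass can flag generic attributions for a human to resolve. The patterns below are assumptions for illustration; extend them to match your house style, and let an editor make the final call.

```python
import re

# Illustrative patterns for unattributed authority claims (an assumption,
# not an exhaustive list); add your newsroom's own recurring offenders.
EMPTY_AUTHORITY_PATTERNS = [
    r"\bexperts (?:say|believe|agree|warn)\b",
    r"\bresearchers (?:say|believe|found|warn)\b",
    r"\baccording to (?:experts|researchers|analysts|studies)\b",
    r"\bstudies (?:show|suggest)\b",
]

def flag_empty_authorities(text: str) -> list[str]:
    """Return sentences that lean on generic authority phrasing for manual review."""
    flagged = []
    # Naive sentence split; good enough for an editing pass.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if any(re.search(p, sentence, flags=re.IGNORECASE) for p in EMPTY_AUTHORITY_PATTERNS):
            flagged.append(sentence.strip())
    return flagged

if __name__ == "__main__":
    # Toy draft: the second sentence names its source, the first does not.
    draft = ("Experts believe the policy will cut costs. "
             "The finance ministry's 2025 budget review puts the figure at 3 percent.")
    for sentence in flag_empty_authorities(draft):
        print("Needs a named source:", sentence)
```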

Why this matters for your workflow

Large language models can wrap half-truths in prose that reads clean and trustworthy. As these tools spread, your advantage isn't writing faster; it's writing with verified clarity and defensible sourcing.

Tools and next steps

  • AI tools for copywriting: Assess tools that help with drafting while keeping verification in your control.
  • Courses by job: Build skills in prompt discipline, fact-checking workflows, and ethical AI use for writing teams.

Bottom line

AI can make falsehoods feel tidy and true. Treat certainty, generic authorities, and values-heavy endings as cues to slow down, source up, and make every claim verifiable. That's the craft advantage readers will keep paying for.

