Banality, Bloat, and Bias: Why AI Can't Write a Good Paper
LLM prose predicts safe words, not original thought. This guide names 10 failure modes and shows how to fix them with evidence, clear causation, consistent terms, and human agency.

Why Serious Writers Still Beat AI: 10 Failure Modes of LLM Prose (and how to fix them)
Many in the humanities are exhausted by grading AI-assisted essays. But letting writing go soft produces weak thinking. If you write for a living, treat this as a field guide to common large language model (LLM) failures - and the habits that keep your work sharp.
LLMs predict the next likely word. That's useful for autocomplete. It's terrible for original thought. Here's how to spot the tells and outwrite the machine.
No. 1: The Polonius problem (stating the banal)
LLMs default to the most obvious take. "Hero's journey." "Tradition vs. modernity." "Individual vs. community." It reads like a summary, not an argument.
- Refuse the first obvious angle. List five takes; pick the least likely but defensible one.
- Replace "X vs. Y" with "Because of Z, X appears to oppose Y - but actually depends on it."
- Ban vague openers ("throughout history," "society shows"). Start with a specific claim.
No. 2: The windbag problem (bloated emptiness)
AI strings together cozy phrases that say nothing: "home to," "some of," "world's most," "diverse works." It's pretty but empty. If a sentence can't be falsified, it's filler.
- Quantify or exemplify. One concrete example beats five glittering adjectives.
- Replace "diverse," "impactful," "important" with the measurable quality you mean.
- Ask: what would prove this wrong? If nothing, cut it.
No. 3: The variation problem (fragmenting unity)
AI avoids repetition by swapping core nouns: one paragraph uses "Sunjata," then "the protagonist," "the central figure," "the key player." Readers lose the throughline.
- Pick canonical names and repeat them. Consistency is clarity.
- Vary verbs and modifiers, not the main subject.
- Read aloud: if you must pause to recall who's who, revert to the proper name.
No. 4: The Roman genitive problem (stringing abstractions)
Expect chains of "x of y" with literary buzzwords: "symbolizing the complexity of narratives in the face of legacies." Grammatical, yet meaningless. No actor, no action.
- Swap nouns for verbs. Who did what, to whom, and how?
- Cap prepositional chains at two links.
- Anchor every abstraction with a specific scene, line, or data point.
No. 5: The causation problem (connecting the unconnected)
AI muddles cause and effect: "This reading reinforces the dichotomy." No - the reading depends on that dichotomy. Glue verbs like "highlights" and "underscores" hide confusion.
- Map causal arrows: X results from Y; Y does not "result from" your interpretation.
- Favor precise relations: "results from," "depends on," "leads to," "contradicts."
- Delete vague glue verbs unless you can specify what changes for whom.
No. 6: The anti-human problem (erasing the interpreter)
AI writes as if texts act: "The folktale exposes contradictions." Texts don't expose; interpreters do. Critical writing needs an accountable thinker.
- Own the claim: "This essay argues…," "I contend…," "The analysis shows…."
- Template: "Claim because Evidence; therefore Consequence."
- Don't anthropomorphize abstractions. Put agency on people.
No. 7: The inflation problem (adjective overload)
Hyper-adjectival prose feels rich but collapses under scrutiny: the wrong modifiers, stacked on abstractions, produce nonsense.
- Draft without adjectives. Add back the 10% that change meaning, not mood.
- Test every modifier: what, exactly, does it commit the sentence to?
- Prefer verbs and concrete nouns over tonal decoration.
No. 8: The racism problem (blaming the victim)
LLMs mirror their datasets. Moralizing tones ("fails Western ethical standards") slide into biased judgments and ahistorical takes. Context disappears; harm follows.
- Ask: whose values are being applied? Are they appropriate to this context?
- Favor primary sources and credible scholarship before drawing moral claims.
- Be careful with "authentic." Define criteria or avoid the term.
No. 9: The plagiarism problem (stolen argumentation)
Derivative theses show up with confident phrasing. AI can echo published arguments - sometimes from the very sources you're analyzing - without citation.
- Search your thesis line verbatim. If it exists, cite it or revise.
- Keep a research log: what you read, what you borrowed, what you changed.
- Use AI, if at all, to generate counterarguments you then verify - not your core claim.
No. 10: The flat-out wrong problem (basic facts twisted)
Hallucinations are common. Even famous, checkable facts get mangled. Confident tone ≠ truth.
- Verify every fact with two independent, reputable sources.
- Prefer primary texts and established references for names, dates, quotes.
- If the claim is new to you and convenient to your argument, double-check it.
The rewrite protocol (how pros fix AI-sounding drafts)
- Highlight the thesis, topic sentences, and conclusion. Do they make one specific claim?
- Strike empty openers and evaluative adjectives. Add concrete evidence.
- Replace synonym churn with consistent terms for key actors and ideas.
- Audit causal verbs. Redraw the logic until each link is testable.
- Reinsert the human interpreter. Make clear who argues what and why it matters.
- Fact-check in a separate pass. Only then polish rhythm and style.
For writers who still use AI (without letting it write)
Treat these tools as interns, not authors. Brainstorm, outline, or generate checklists - then do the thinking and writing yourself. Always verify, specify, and own the argument.
Want structured ways to audit prompts and reduce hallucinations? Explore practical resources on prompt engineering and AI copywriting tools that support, rather than outsource, your voice.
Writing is thinking. Keep the thinking human - that's what clients, editors, and readers pay for.