Workslop is wasting researchers' time, and it's building AI resentment
Corporate AI is supposed to boost productivity. Instead, many teams are flooded with "workslop": AI-generated content that looks polished but doesn't move the work forward. Think memos padded with words like "underscore" and "commendable," or reports stuffed with em dashes that feel hollow on a close read.
Researchers from Stanford's Social Media Lab and BetterUp describe workslop as content that masquerades as good work without adding substance. It spreads laterally among peers, up to managers, and down to direct reports. In one survey, 40% of respondents received workslop from a colleague in the past month. The result: rework, extra meetings, and doubt about colleagues' judgment and about AI itself.
Why this hits science and research hard
- It bloats literature reviews with generic summaries that miss key mechanisms or caveats.
- It weakens methods and results sections with vague claims and missing parameters.
- It increases reviewer fatigue and slows grant, IRB, and manuscript cycles.
- It risks misinterpretation of findings and reputational damage for labs and institutions.
How to spot workslop fast
- Vague verbs and filler: "leverages," "underscores," "is commendable," with no concrete data or effect sizes.
- Structure without evidence: neat headings, zero citations, or citations that don't support the claim.
- Hallucinated or broken references: missing DOIs, wrong authors, or journals that don't exist.
- Inconsistent units or stats: mismatched Ns, p-values without tests, or methods that can't reproduce the results.
- Overuse of em dashes and formal clichés that add bulk but no clarity.
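Several of these signals can be pre-screened before a human spends time on the draft. Below is a minimal Python sketch; the filler phrases, the DOI regex, and the per-1,000-word normalization are illustrative assumptions, not a validated detector, and a clean score never replaces a close read.

```python
import re
import sys

# Illustrative signals only; tune the phrase list and thresholds for your field.
FILLER_PATTERNS = [
    r"\bleverag(?:e|es|ing)\b",
    r"\bunderscor(?:e|es|ing)\b",
    r"\bcommendable\b",
    r"\bit is worth noting\b",
]
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+\b")

def screen_draft(text: str) -> dict:
    """Return rough counts of common workslop signals, normalized per 1,000 words."""
    words = max(len(text.split()), 1)
    per_1k = lambda n: round(n * 1000 / words, 1)
    filler_hits = sum(len(re.findall(p, text, re.IGNORECASE)) for p in FILLER_PATTERNS)
    return {
        "words": words,
        "filler_per_1k": per_1k(filler_hits),
        "em_dashes_per_1k": per_1k(text.count("\u2014")),
        "doi_count": len(DOI_PATTERN.findall(text)),
    }

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print(screen_draft(f.read()))
```

Treat the output as a triage hint: high filler density or zero DOIs in a literature review is a reason to look closer, not a verdict.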
Set a policy that reduces friction, not output
- Define "done": For any AI-assisted document, list the minimum evidence required (data, parameters, citations, figures, acceptance criteria).
- Require provenance: Authors must state where AI was used and paste prompts in an appendix or commit log.
- Use a short review checklist: Claims-evidence match, citation validity, reproducibility notes, and decision-ready summary.
- Gate high-risk sections: no AI-drafted conclusions, safety assessments, ethics statements, or statistical interpretations without human sign-off.
- Time-box generation: If a draft isn't clear after two prompt iterations, switch to human-first writing.
A lightweight protocol for AI-assisted writing
- Start with a facts pack: objectives, audience, data tables, constraints, and the one decision the reader must make.
- Prompt for structure, not prose: ask for an outline with the exact claims and required evidence.
- Force citations with checks: require DOIs/PMIDs and verify them before any editing pass (a verification sketch follows this list).
- Apply a "3-question" clarity test: What's the claim? What's the evidence? What should the reader decide?
- Finish with a human synthesis: add limitations, assumptions, and next steps in your own words.
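Citation verification in particular is easy to script. The sketch below checks whether each DOI is registered with Crossref via its public REST API (api.crossref.org); DOIs from other registration agencies, such as DataCite, would need their own lookup, and a resolving DOI still says nothing about whether the citation supports the claim.

```python
import requests

def doi_registered(doi: str, mailto: str = "lab@example.org", timeout: float = 10.0) -> bool:
    """Return True if the DOI is found in Crossref's works endpoint (HTTP 200)."""
    url = f"https://api.crossref.org/works/{doi.strip()}"
    headers = {"User-Agent": f"citation-check/0.1 (mailto:{mailto})"}  # polite-pool etiquette
    try:
        return requests.get(url, headers=headers, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False  # network failure: treat as unverified, not as invalid

# Placeholder references; in practice, pull these from your bibliography file.
refs = {"ref_smith": "10.1000/placeholder-1", "ref_jones": "10.9999/placeholder-2"}
needs_manual_check = [name for name, doi in refs.items() if not doi_registered(doi)]
print("Verify by hand:", needs_manual_check)
```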
Metrics that matter
- Rework rate: percentage of AI-assisted drafts that need major revision.
- Time to clarity: minutes from handoff to "reader can explain the core claim."
- Error density: citation errors, stat inconsistencies, or missing parameters per 1,000 words.
- Team sentiment: monthly pulse on AI usefulness vs. frustration.
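A minimal sketch of how the rework rate and error density above could be tallied, assuming you log a small record per reviewed draft (the field names here are arbitrary):

```python
from dataclasses import dataclass

@dataclass
class DraftReview:
    ai_assisted: bool
    needed_major_revision: bool
    words: int
    citation_errors: int
    stat_inconsistencies: int
    missing_parameters: int

def rework_rate(reviews: list[DraftReview]) -> float:
    """Share of AI-assisted drafts that needed major revision."""
    ai_drafts = [r for r in reviews if r.ai_assisted]
    if not ai_drafts:
        return 0.0
    return sum(r.needed_major_revision for r in ai_drafts) / len(ai_drafts)

def error_density(review: DraftReview) -> float:
    """Citation, stat, and parameter errors per 1,000 words."""
    errors = review.citation_errors + review.stat_inconsistencies + review.missing_parameters
    return errors * 1000 / max(review.words, 1)
```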
Governance and transparency
- Label AI-assisted sections and archive prompts alongside drafts for audit.
- Store intermediate outputs to trace where errors entered.
- Document the model and version used; note any fine-tuning or retrieval-augmented generation (RAG) sources.
- Disclose conflicts when AI summarizes your own prior work or competing research.
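One way to make the labeling and archiving concrete is a small provenance file written next to each draft. A sketch, with the schema below as an assumption rather than any standard:

```python
import datetime
import hashlib
import json
import pathlib

def write_provenance(draft_path: str, model: str, model_version: str,
                     prompts: list[str], rag_sources: list[str]) -> pathlib.Path:
    """Write an audit record alongside the draft so errors can be traced later."""
    draft = pathlib.Path(draft_path)
    record = {
        "draft": draft.name,
        "draft_sha256": hashlib.sha256(draft.read_bytes()).hexdigest(),
        "model": model,
        "model_version": model_version,
        "prompts": prompts,
        "rag_sources": rag_sources,
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    out_path = draft.parent / (draft.name + ".provenance.json")
    out_path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return out_path
```

Archiving intermediate outputs the same way, one file per generation step, makes it straightforward to trace where an error entered.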
Training that actually reduces workslop
- Quality bar: examples of slop vs. substance from your field; why one works, why the other fails.
- Prompt craft: constraints, data-first prompts, and refusal handling.
- Fact-check drills: DOI validation, stat replication, unit consistency checks (a stat-replication sketch follows this list).
- Toolchain: editor extensions, bibliographic managers, and notebook linting for citations and stats.
- Policy rehearsal: role-play handoffs with the review checklist and time-boxed iterations.
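For the stat-replication drill, one concrete exercise is recomputing reported p-values from the test statistic and degrees of freedom. Here is a minimal sketch for a two-sided t-test using SciPy; the tolerance and the example numbers are arbitrary.

```python
from scipy import stats

def t_test_p_consistent(t_value: float, df: float, reported_p: float, tol: float = 0.005) -> bool:
    """Recompute the two-sided p-value from t and df and compare it with the reported one."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    return abs(recomputed_p - reported_p) <= tol

# Drill: a draft reports "t(28) = 2.10, p = .01". The check flags the mismatch,
# because the recomputed two-sided p is roughly .045, not .01.
print(t_test_p_consistent(t_value=2.10, df=28, reported_p=0.01))
```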
If your team needs a practical starting point for prompts and review workflows, see this prompt engineering collection at Complete AI Training: Prompt Engineering.
A 10-minute pre-send checklist
- Claim-evidence table present and complete.
- All references validated with DOIs/PMIDs; no dead links.
- Numbers add up: Ns, units, tests, p-values, CIs, and assumptions stated.
- A one-paragraph decision summary at the top.
- AI use and prompts disclosed; sensitive sections human-written.
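If handoffs go through a script or CI step, this checklist can double as a hard gate. A minimal sketch, with the item keys as assumptions:

```python
PRE_SEND_CHECKS = {
    "claim_evidence_table": "Claim-evidence table present and complete",
    "references_validated": "All references validated with DOIs/PMIDs; no dead links",
    "numbers_consistent": "Ns, units, tests, p-values, CIs, and assumptions stated",
    "decision_summary": "One-paragraph decision summary at the top",
    "ai_use_disclosed": "AI use and prompts disclosed; sensitive sections human-written",
}

def pre_send_gate(status: dict[str, bool]) -> list[str]:
    """Return the descriptions of any checklist items that are not yet satisfied."""
    return [desc for key, desc in PRE_SEND_CHECKS.items() if not status.get(key, False)]

# Example: only two items ticked, so the handoff is blocked.
missing = pre_send_gate({"claim_evidence_table": True, "decision_summary": True})
if missing:
    print("Do not send yet:")
    for item in missing:
        print(" -", item)
```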
Bottom line
AI can speed parts of research work, but unchecked workslop adds drag and erodes trust. Clear policies, data-first prompts, fast verification, and transparent provenance cut waste. Do less decoration and more decision-ready writing, and the resentment fades.
Background on the "workslop" concept: Stanford Social Media Lab.