Weekend Reads: Open science benefits, a misconduct probe in Korea, and the risk of outsourcing reviews to AI
Happy 2026. Here's the first Weekend Reads of the year with what mattered in research integrity, publishing, and the practice of science.
This week's highlights
BMJ pulled a clinical trial after finding severe issues with randomization. The signal: allocation and blinding are still weak points in trial design and reporting.
Retraction Watch hit 15 years, and its Center for Scientific Integrity expanded its remit. Another note from the trenches: a "data lost in a flood" case where the excuse held up - a reminder to audit data retention and backup policies.
Trackers you'll care about: the Hijacked Journal Checker passed 400 entries, the Retraction Watch Database crossed 63,000 retractions, COVID-19 retractions climbed past 460, and the mass resignations list reached 47.
Elsewhere in research integrity and publishing
- A major analysis reports sparse proof that "open science" is delivering measurable benefits.
- Korea University is probing alleged misconduct in papers linked to a politician's daughter and possible preferential treatment.
- Teams leaning on AI for literature reviews risk losing tacit knowledge that only comes from hands-on synthesis.
- Publish-or-perish pressure may be pushing retractions up, experts argue.
- Indexing and ranking debates keep collapsing rich research activity into one metric - and that distorts behavior.
- External oversight could push journals to address integrity issues they've let slide.
- A prominent environmental health journal disappeared, but it's reportedly in transition.
- A research librarian compiled a 2025 bibliography of genAI-fueled research fraud.
- New work maps and ranks institutions based on the output of affiliated authors.
- A professor says he was fired for calling out entitlement and plagiarism.
- Too few archaeologists (so far) are weighing in on peer review fixes; different fields need different evaluation models.
- Another year, another string of scandals, and still little institutional soul-searching.
- A Lund University professor blamed fake references in a tenure file on a copy/paste error.
- Changes to journal ratings appear to influence author diversity and study characteristics.
- Evidence of sex bias in peer review and citations carries implications for how we evaluate work.
- Podcast: "Peer review is broken," featuring Melinda Baldwin and Serge Horbach.
- A former Stanford researcher received probation for data tampering that included insults such as "doctor too stupid."
- Metascience: research incentives operate like markets - at micro and macro levels.
- "Probable predatory publishing" hotspots identified across nations, institutions, and disciplines.
- UK publishing deals with the "big five" called a key milestone.
- Human experts found AI failed key steps in a scoping review on neural mechanisms of cross-education.
- ORCID use appears to improve visibility and retrieval of Arab university publications. If your team hasn't adopted ORCID, start here: orcid.org.
- Using retracted articles to train reviewer "rhetorical sensitivity" might improve feedback quality.
- LLMs as synthetic social agents: promising tools, but validity and oversight need tight controls.
Why this matters for your lab or department
The incentives are loud. The guardrails are soft. That's why the same problems - weak randomization, missing data, sloppy authorship, predatory outlets, and overreliance on single metrics - keep surfacing.
If your team uses AI for reviews, set a line. Let models assist with routing and deduplication, but make humans do synthesis, coding, and claims. Keep an AI disclosure log and preserve prompts/outputs in your project archive.
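To make that concrete, here is a minimal sketch of what such a disclosure log could look like: a small Python helper that appends one JSON record per model call to a project file. The file name, field names, and the `log_ai_use` helper are illustrative choices, not a standard.

```python
# Minimal sketch of an AI disclosure log: append one JSON record per model call
# so prompts and outputs survive alongside the project archive.
# The file name and field names here are illustrative, not a standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_disclosure_log.jsonl")  # keep with the project archive or DMP materials

def log_ai_use(task: str, model: str, prompt: str, output: str, reviewer: str) -> None:
    """Append a timestamped record of one AI-assisted step (e.g., deduplication)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "task": task,                 # e.g., "title/abstract dedup", never "synthesis"
        "model": model,               # model name and version as reported by the vendor
        "prompt": prompt,
        "output": output,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "human_reviewer": reviewer,   # the person who checked the output
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example:
# log_ai_use("title/abstract dedup", "model-name (version)",
#            "Are these two records duplicates? ...",
#            "Likely duplicates: same DOI.", reviewer="J. Smith")
```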
Backups aren't optional. Document retention schedules, test restores quarterly, and store raw data plus codebooks offsite. Write this into your DMP and grant handoffs.
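A restore test can be as simple as hashing a freshly restored copy against the live data and flagging anything missing or changed. The sketch below assumes a plain directory layout; the paths and the `verify_restore` helper are placeholders for your own storage setup.

```python
# Minimal sketch of a quarterly restore test: hash every file in the live data
# directory and in a freshly restored copy, then report anything missing or changed.
# Directory names are placeholders for your own storage layout.
import hashlib
from pathlib import Path

def checksums(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    result = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            result[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return result

def verify_restore(live_dir: str, restored_dir: str) -> bool:
    live, restored = checksums(Path(live_dir)), checksums(Path(restored_dir))
    missing = sorted(set(live) - set(restored))
    changed = sorted(f for f in live.keys() & restored.keys() if live[f] != restored[f])
    for f in missing:
        print(f"MISSING in restore: {f}")
    for f in changed:
        print(f"CHECKSUM MISMATCH: {f}")
    ok = not missing and not changed
    print("Restore test PASSED" if ok else "Restore test FAILED")
    return ok

# Example: verify_restore("raw_data/", "restore_test_2026Q1/")
```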
Authorship and affiliations need a checklist. Confirm names, order, contributions, and institutional wording before submission. Require ORCID for all co-authors and link it in your internal roster.
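One piece of that checklist is easy to automate: every ORCID iD ends in a check digit computed with ISO 7064 MOD 11-2, so mistyped or malformed iDs can be caught before submission. The `orcid_checksum_ok` helper below is an illustrative sketch; it verifies format and checksum only, not that the iD actually belongs to your co-author, so still click through to the orcid.org record.

```python
# Minimal sketch of an ORCID iD sanity check for a pre-submission checklist:
# validates the format and the ISO 7064 MOD 11-2 check digit that ORCID uses.
# It confirms the iD is well formed, not that it belongs to the named co-author.
import re

def orcid_checksum_ok(orcid: str) -> bool:
    """Return True if the iD matches ORCID's format and its check digit is valid."""
    if not re.fullmatch(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]", orcid):
        return False
    digits = orcid.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return digits[-1] == expected

# Example:
# orcid_checksum_ok("0000-0002-1825-0097")  # True (valid check digit)
# orcid_checksum_ok("0000-0002-1825-0098")  # False (check digit does not match)
```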
Stop treating index inclusion and journal ranks as quality proxies. Use method checks, data availability, and protocol preregistration as decision inputs. Bundle COI statements and open materials into your internal review steps.
Quick actions for the week
- Trial teams: re-run your randomization and blinding audit on current protocols.
- Group leads: publish a one-page AI use policy for literature reviews and evidence syntheses.
- Admin: confirm offsite backups exist and execute a test restore.
- All authors: add ORCID to your email signature and lab website; link IDs in your next submission.
- Editors/board members: push for independent integrity reviews on high-risk submissions.
Oopsie of the week
A published paper credited "Nelson Mandela" as a co-author when the intended entry was the Nelson Mandela African Institution of Science and Technology, an affiliation. Run an authorship and affiliation check before submission and again at proofs.
Upcoming talks
- "Maintaining Integrity in Peer-Reviewed Publications," Jefferson Anesthesia Conference 2026 - February 2, Big Sky, Montana
- "Scientific Integrity Challenged by New Editorial Practices" - February 12, virtual