Weekend reads: Fake citations, hidden COIs in psychiatry, and an AI that breaks online surveys

Fake citations, shaky COVID data, hidden COIs, and AI dodging survey checks show the guardrails are loose. Audit references, enforce COIs, set AI rules, and lock down surveys.

Categorized in: AI News, Science and Research
Published on: Nov 23, 2025

Weekend brief for researchers: Fake citations, hidden COIs, and an AI threat to online surveys

Another week, another set of reminders that research integrity is a moving target. Retraction Watch highlighted false attributions, shaky COVID-19 data, and AI sneaking past our filters. If you run studies, review papers, or manage research teams, these stories are not background noise - they're signals to tighten your processes.

Here's the practical rundown and what to do about it.

What moved this week

  • Computing society pulls works for "citation falsification" months after a sleuth's defamation conviction.
  • Research integrity conference receives AI-generated abstracts - yes, at a conference about integrity.
  • Study finds AI tools unreliable at identifying retracted papers.
  • Lancet journal retracts a COVID-19 metformin paper almost two years after authors requested a correction.
  • Publisher flags a paper over a fabricated reference crediting an article that doesn't exist.
  • COVID-19 paper by researchers at Harvard and Duke gets an expression of concern for unreliable data.
  • Exclusive report: A reviewer advised against publishing a paper claiming DNA in COVID vaccines.

Elsewhere in research integrity

  • Top-earning U.S. psychiatry authors had "substantial" undisclosed financial conflicts of interest (COIs).
  • A researcher built an AI agent that defeats most online survey defenses, evading detection 99.8% of the time.
  • More than 200 papers in Korea were retracted for AI-related issues.
  • Elsevier finally retracted the last of five papers first reported more than a decade ago.
  • The BMJ's editor-in-chief wrote on scrutiny in medical research; Dorothy Bishop noted the journal's open data policy is "not a failure."
  • NIH grant cuts disrupted hundreds of clinical trials.
  • Podcast highlight: The Hidden Crisis of Bad Science with James Heathers.
  • Researcher accused of grant misuse won damages after the European Anti-Fraud Office removed an "unlawful" press release.
  • Debates on whether deals to save research funding are good for research, and whether we're ready for a multipolar research system.
  • Spotlights on research integrity in Mexican academia and broader analyses of scientific fraud.
  • In memoriam: The AMA Journal of Ethics ended publication.
  • Lessons from a long road to a first-author paper; systematic reviews on the same topic often miss key standards.
  • Pressure to publish continues to strain integrity, as flagged at the Heidelberg Laureate Forum.
  • Critiques of Global North dominance in publishing and a call to re-communalise how we publish.
  • New looks at impact metrics, peer review models, and a defense of Francesca Gino on a podcast.
  • How English-centric metrics distort global productivity.
  • HHS named authors and released peer review comments for its gender dysphoria report.
  • Notes on scientific writing in the age of AI.
  • And a scholar called for the retraction of a 2018 article that cited The Onion to claim historians fabricated ancient Greece.

What this means for your practice

The pattern is clear: citation fraud, undisclosed money, automated survey abuse, and weak AI screening. Human oversight still matters more than most people want to admit. Here's how to adapt without slowing your team to a crawl.

Action checklist for PIs, lab managers, and editors

  • Audit references before submission or acceptance. Cross-check every citation that supports a key claim: verify the paper exists, the authors match, and the cited conclusion is accurate. Use PubMed's retraction flags and NLM retraction guidance for quick screening; a retraction-check sketch follows this list. NLM: Retracted Publications
  • Strengthen COI reporting. Don't rely on self-disclosure. For U.S. authors, spot-check with Open Payments. Require updates at revision and acceptance. CMS Open Payments
  • Set a clear AI policy. Require disclosure of AI use for writing, analysis, or data collection. Ban AI-generated references. Keep raw data and prompts for audit. Assume detectors miss sophisticated use.
  • Protect online surveys. Combine device fingerprinting, IP/country limits, server-side CAPTCHAs, time-based checks, honeypot items, and dynamic attention checks. Pay by verified completion quality, not speed. Review open-ended responses for templated text; a screening sketch follows this list.
  • Pre-register and share data/code. Pre-registration limits post-hoc storycrafting. Public data/code (with proper de-identification) deters manipulation and speeds peer verification.
  • Use two-tier review on high-stakes claims. One reviewer on methods and data integrity, one on novelty/impact. Require authors to justify each key inference with traceable evidence.
  • Plan for slow corrections. Retractions and corrections can lag for months or years. Keep a living document of citations your lab depends on and re-verify them quarterly.
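
A minimal retraction-check sketch for the reference audit above, in Python with only the standard library and NCBI's public E-utilities endpoint. It assumes you track PMIDs for the papers you cite; NLM marks retractions with the "Retracted Publication" publication type, which the query below filters on. The PMID list is a placeholder, not a real citation set.

```python
import json
import time
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def is_flagged_retracted(pmid: str) -> bool:
    """Return True if PubMed indexes this PMID as a retracted publication."""
    term = f'{pmid}[uid] AND "retracted publication"[Publication Type]'
    query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH}?{query}") as resp:
        result = json.load(resp)["esearchresult"]
    return pmid in result.get("idlist", [])

pmids_we_cite = ["12345678"]  # placeholder PMIDs; substitute your lab's own list
for pmid in pmids_we_cite:
    print(pmid, "RETRACTED" if is_flagged_retracted(pmid) else "ok")
    time.sleep(0.4)  # stay under NCBI's ~3 requests/second courtesy limit
```

This catches only retractions NLM has indexed; pair it with publisher pages or the Retraction Watch database for anything PubMed doesn't cover.

And a server-side screening sketch for the survey defenses above. The field names (honeypot, seconds_elapsed, attention_pairs) and the 120-second floor are illustrative assumptions, not a standard; calibrate thresholds against pilot data from verified human participants, and treat flags as triggers for manual review, not automatic rejection.

```python
from dataclasses import dataclass

@dataclass
class Response:
    honeypot: str           # hidden field humans never see; bots often fill it
    seconds_elapsed: float  # completion time from server-side timestamps
    attention_pairs: list[tuple[int, int]]  # (item, reverse-coded twin), 1-5 scale

MIN_PLAUSIBLE_SECONDS = 120  # assumption: tune to your pilot's median time

def flags(r: Response) -> list[str]:
    out = []
    if r.honeypot.strip():
        out.append("honeypot filled")
    if r.seconds_elapsed < MIN_PLAUSIBLE_SECONDS:
        out.append("implausibly fast")
    # An item and its reverse-coded twin should sum to roughly 6 on a 1-5 scale.
    for item, twin in r.attention_pairs:
        if abs((item + twin) - 6) > 2:
            out.append("inconsistent attention pair")
            break
    return out

r = Response(honeypot="", seconds_elapsed=95.0, attention_pairs=[(4, 2), (5, 5)])
print(flags(r))  # ['implausibly fast', 'inconsistent attention pair']
```

Given the 99.8% evasion rate reported above, layered checks like these raise the cost of abuse but won't stop a determined agent; verified recruitment remains your strongest defense.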

If you run a journal or conference

  • Require raw data availability statements and spot-audit a random subset each issue.
  • Screen abstracts for AI-written patterns, then confirm with author attestations and sample raw data. Don't rely on detectors alone.
  • Publish clear COI enforcement steps. Non-disclosure triggers corrections, editorial notes, or retractions - and applies to all authors.

Team prompts for your next lab meeting

  • Which 10 citations are most critical to our current manuscripts, and have we verified them this month?
  • Where could AI have silently touched our workflow (writing, stats, survey responses), and how do we document that?
  • Do our COI checks catch external payments, consulting, and equity - or just what people remember to disclose?
  • If one core paper we cite is retracted next week, what's our contingency?

Level up your AI hygiene

If your group is updating skills for AI-aware research workflows, consider curated training by job role. It's faster than piecing together random tutorials and helps set shared standards across your team.

Explore AI courses by job role

The takeaway: Don't outsource trust to tools or reputations. Make verification a habit, disclose money clearly, treat AI as both a helper and a threat, and keep a short feedback loop between claims and evidence. That's how you protect your work - and your readers.

