Weekend reads: Court tosses out challenge to ORI funding ban; prof steps down after AI citation 'scandal'; senator seeks journal's COVID-19 manuscripts
This is the final Weekend Reads of 2025. A year-end wrap-up is coming next week, but the signal from this week is clear: Research integrity and AI policy will demand sharper practices in 2026. If you value sustained reporting, curated links, and a comprehensive database of retractions, consider supporting this work with a tax-deductible donation. Every dollar helps.
The Center for Scientific Integrity continues to support multiple efforts: the Retraction Watch Database, the Medical Evidence Project, the Hijacked Journal Checker, the Elisabeth Bik Science Integrity Fund, and the Sleuths in Residence Program.
This week's highlights
- A professor in India added coauthors who "kindly covered" the publication fee and removed others.
- A court rejected a researcher's challenge to a funding ban imposed by the Office of Research Integrity (ORI), the federal office that oversees research misconduct findings in U.S. Public Health Service-funded work.
- The Hijacked Journal Checker reached 400 entries, a reminder to verify journals before submitting or citing.
Elsewhere in research and publishing
- Professor steps down from an associate deanship after an AI-generated references scandal.
- A U.S. senator asked Science for coronavirus manuscripts and emails, citing concerns about "dangerous research."
- Reports continue that AI is inventing academic papers that don't exist, yet they're getting cited.
- A new National Science Foundation (NSF) initiative shifts more support to team-based research.
- Mass abstract submissions are crowding conferences and diluting quality, say academic medicine researchers.
- CDC awarded $1.6 million for a hepatitis B vaccine study led by Danish researchers who have sparked controversy.
- Experts behind a puberty blockers trial respond to growing opposition: "This is why the trial is necessary."
- Conference organizers are testing AI authors and AI reviewers.
- Negative data is a valid observation about how the world works.
- Published peer review reports tend to contain more useful information than unpublished reports.
- Organizations are central to research ethics development, per the Finnish National Board on Research Integrity (English summary available).
- Publishers are rethinking their value in an AI-heavy publishing environment.
- Most researchers would gain more recognition if assessed by article-level metrics rather than journal-level metrics.
- Research evaluation systems are too slow to keep up with AI-accelerated work.
- To detect inconsistencies and fraud, authors should share data underlying summary statistics as routine practice.
- Online research faces growing threats from fraud; new lessons and recommendations are emerging.
- Sophisticated bots are contaminating surveys, behavioral games, and related study formats.
- Editorial boards in global health remain insufficiently diverse.
- Journal AI policies are struggling to slow the surge of AI-assisted writing.
- AI-written peer reviews can slip past detection tools.
- From The BMJ's Christmas issue: How recent is "recent"? A look at suspiciously timeless citations.
Why this matters for your lab, team, or journal club
- Update authorship policies: Spell out who qualifies as an author, how fees are handled, and how changes are documented.
- Set AI rules in writing: Define acceptable AI use for drafting, editing, literature discovery, and peer review. Require transparency in disclosures.
- Audit citations: Check references for existence, accuracy, and appropriateness. AI hallucinations are common; verify DOIs and URLs.
- Prefer article-level assessment: Track citation contexts, data reuse, and code adoption. Don't lean on journal prestige as a proxy for quality.
- Demand data behind summaries: Ask for the underlying data used to compute summary stats in submissions, reviews, and collaborations.
- Protect human-subjects research: Deploy bot checks (e.g., honeypots, response-time flags), and validate samples with attention and consistency tests.
- Prepare for team grants: Build cross-disciplinary proposals and governance up front: authorship order, data sharing, and conflict resolution.
- Value negative findings: Encourage preregistration and registered reports to boost acceptance of null or negative outcomes.
- Watch for hijacked journals: Verify journal domains, editorial contacts, indexing status, and APC invoices.
- Increase peer review transparency: Where possible, publish reviews and decision letters to improve informativeness and trust.
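The bot-check item above can be sketched in a few lines of Python. This is a minimal illustration, not a validated screening protocol: the field names (`honeypot_field`, `duration_seconds`) and the 20-second cutoff are assumptions you would tune to your own survey platform.

```python
# Hypothetical bot screening for online survey responses. Field names
# and the 20-second floor are illustrative assumptions, not a standard.

MIN_SECONDS = 20  # assumed lower bound for a plausible human completion time

def flag_suspect(response: dict) -> list[str]:
    """Return the reasons a survey response looks automated (empty if none)."""
    reasons = []
    # Honeypot: a field hidden from human respondents; bots often fill it.
    if response.get("honeypot_field"):
        reasons.append("honeypot filled")
    # Response-time flag: implausibly fast completion suggests automation.
    if response.get("duration_seconds", 0) < MIN_SECONDS:
        reasons.append("completed too quickly")
    return reasons

responses = [
    {"id": 1, "honeypot_field": "", "duration_seconds": 180},
    {"id": 2, "honeypot_field": "http://spam", "duration_seconds": 4},
]
# Map each suspect response id to the reasons it was flagged.
suspects = {r["id"]: reasons for r in responses if (reasons := flag_suspect(r))}
```

Flagged responses still warrant manual review before exclusion; attention and consistency checks, as noted above, catch what simple heuristics miss.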
Practical tip on AI use
If your team relies on AI for drafting or literature triage, train people to prompt for citations plus verification steps, and require a manual check before submission. For structured prompting strategies, see this practical collection: Prompt Engineering.
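One piece of that manual check can be partly automated: pulling DOI-shaped strings out of a drafted reference list so a human can resolve each one at doi.org before submission. A minimal sketch, using a regex pattern similar to the one Crossref recommends for matching DOIs; treat matches as a checklist for verification, not proof a reference exists.

```python
# Sketch: extract DOI-like strings from reference text for manual checking.
# A matched string is NOT evidence the cited paper exists; each DOI still
# needs to be resolved (e.g., at https://doi.org) and compared to the claim.
import re

# Pattern loosely based on Crossref's suggested DOI-matching regex.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(references: str) -> list[str]:
    """Return every DOI-shaped string found in the reference text."""
    return DOI_PATTERN.findall(references)

refs = """
Smith J. et al. Example study. doi:10.1000/xyz123
Doe A. A paper with no DOI listed.
"""
to_verify = extract_dois(refs)  # references without a DOI need extra scrutiny
```

References that yield no DOI at all deserve the closest look, since fabricated AI citations often omit or garble identifiers.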
Retractions and corrections to note
- A 2017 Science Signaling paper by neuroscientist Sylvain Lesné has been retracted; the journal issued an expression of concern in 2022.
Upcoming talks
- Maintaining Integrity in Peer-Reviewed Publications - Jefferson Anesthesia Conference 2026, featuring Adam Marcus (February 2, Big Sky, Montana)
- Scientific Integrity Challenged by New Editorial Practices, featuring Ivan Oransky (February 12, virtual)
How to support and get in touch
If you value independent coverage of research integrity, consider a tax-deductible donation. If you spot a retraction that's missing from the database or have feedback, email: team@retractionwatch.com.