Can an AI reviewer make peer review less painful? q.e.d maps claims, flags gaps, and makes you think

Peer review drags; q.e.d offers a fast, structured pre-review that builds a claim tree, flags gaps, and suggests fixes. Scientists say it speeds drafts without replacing judgment.

Published on: Nov 19, 2025

Can AI Make Peer Review Less Painful? Inside q.e.d's Bid to Help Scientists Think Clearer

For Oded Rechavi of Tel Aviv University, academia's upside is obvious: you get to chase curiosity. The friction shows up at publication, where peer review can feel like the fun police. "The problem is that the fun ends when you need to publish it and go through the [review] process," he said. He values feedback, but like many biologists, he sees the current setup as far from ideal.

The pace is slow. Reviewer bandwidth is limited, conflicts and biases exist, and authors feel pressure to cater to reviewer preferences that may dilute the science. That culture, paired with "publish or perish," burns time and energy better spent on experiments and ideas. Groups like COPE outline best practices (see COPE's peer review guidance), but the day-to-day grind still drags for working scientists.

q.e.d: An AI Pre-Reviewer Built by Scientists, for Scientists

Two years ago, Rechavi asked a simple question: can AI help researchers get to sharper papers, faster? He and a cross-disciplinary team built q.e.d, named after "quod erat demonstrandum," the phrase that closes a mathematical proof. It isn't a peer; it's a pre-review tool for stress-testing a manuscript before submission.

Upload a draft, wait ~30 minutes, and you get a report with a "claim tree." The system extracts claims, checks the logic between them, highlights strengths and gaps, and suggests experimental and textual edits. It's a structured way to see what your paper is actually saying, and what it can't yet support.
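
q.e.d's report format isn't public, so the following is only a rough mental model, not the tool's actual schema: a claim tree can be pictured as nested claims, each pointing at the evidence meant to support it, with unsupported links surfacing as gaps. The class names, fields, and example claims below are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in a hypothetical claim tree (illustrative, not q.e.d's schema)."""
    statement: str                                  # what the paper asserts
    evidence: list = field(default_factory=list)    # figures/analyses cited in support
    children: list = field(default_factory=list)    # sub-claims the statement rests on

    def gaps(self, path="root"):
        """Yield claims that have neither direct evidence nor supporting sub-claims."""
        if not self.evidence and not self.children:
            yield path, self.statement
        for i, child in enumerate(self.children):
            yield from child.gaps(f"{path}.{i}")

# A made-up manuscript skeleton: the second sub-claim has no support yet.
paper = Claim(
    "Gene X regulates the stress response across generations",
    children=[
        Claim("Knockout of gene X abolishes the inherited response", evidence=["Fig 2A"]),
        Claim("The effect is transmitted through the germline"),
    ],
)

for where, statement in paper.gaps():
    print(f"Unsupported claim at {where}: {statement}")
```

Walking a structure like this is roughly what "seeing what your paper is actually saying" means in practice: every leaf either points at data or stands exposed as a gap.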

Early Signals from the Bench

In August 2025, the team invited outside researchers to test the system. Michał Turek from the Polish Academy of Sciences expected the usual LLM pitfalls like hallucinated citations, but said q.e.d "gave pretty accurate suggestions on what you should do to support your claim." He also found its positioning of the work against current knowledge (novel vs. known) useful, something general LLMs still fumble.

Since its launch in October 2025, researchers from more than 1,000 institutions have tried q.e.d. The interest makes sense: it promises fast, structured critique without the months-long wait.

What Working Scientists Say

Maria Elena de Bellard at California State University, Northridge, saw q.e.d on X and ran two manuscripts through it. "ChatGPT will think for me, but q.e.d makes me think," she said. It flagged one section as work that had already been done; she cross-checked with Consensus and adjusted her plan. Now she uses it to test whether proposed experiments truly answer her questions, compress feedback cycles, and refine grants.

As a non-native English speaker, she's also dealt with reviews fixating on language over content. q.e.d, she said, "just focuses on the science, and that is how scientific review should be." That shift alone can change how early drafts evolve.

Mark Hanson at the University of Exeter ran a small test: he uploaded a published paper to q.e.d and to another agent, Refine. He wasn't impressed with Refine, but said q.e.d "is doing something quite interesting in the power of how it is able to digest information," even surfacing a plausible genetic rescue experiment. Still, his verdict was measured: the tool is "an average critical thinker," useful for spotting generic gaps quickly, but not a source of original insight, and only as good as its training data.

How to Use q.e.d Without Outsourcing Your Judgment

  • Run it early, not just pre-submission. Treat the claim tree as a hypothesis map to align figures, logic, and conclusions.
  • Triage the gaps it flags: tighten analyses you can fix now; list experiments that would move the needle; cut claims you can't support.
  • Triangulate novelty calls. Validate with domain searches in PubMed and your own literature graph (a minimal search sketch follows this list).
  • Document changes driven by its feedback. This helps in grant resubmissions and, if you go public with reviews, shows a clear audit trail.
  • Keep authorship standards intact. Don't accept suggested experiments blindly; debate them with co-authors and evaluate feasibility, ethics, and cost.
  • For students and new researchers: study the "gold-standard" experiments it proposes as templates, not scripts.
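
To make the novelty triangulation above concrete, here is a minimal sketch that queries PubMed through NCBI's public E-utilities API and reports how many records already match a claim's key terms. The query string is an illustrative assumption; substitute the terms behind the claim you are checking, and treat a large hit count as a cue to read before trusting any "novel" label.

```python
import requests

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hits(query: str, retmax: int = 10):
    """Return the total PubMed hit count and up to `retmax` matching PMIDs."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": retmax}
    resp = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    result = resp.json()["esearchresult"]
    return int(result["count"]), result["idlist"]

# Hypothetical keywords for a claim whose novelty you want to sanity-check.
query = '"transgenerational inheritance" AND "small RNA" AND stress'
count, pmids = pubmed_hits(query)
print(f"{count} PubMed records match; first PMIDs: {pmids}")
```

This doesn't replace reading the papers; it just tells you quickly whether the literature around a claim is crowded or sparse before you lean on an automated novelty call.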

What's Next

q.e.d is still young. Rechavi's team is adding link uploads, grant review tools, and more. He also wants users to share their q.e.d reports to signal transparency and a willingness to improve.

On November 6, 2025, q.e.d announced a collaboration with openRxiv. Researchers can run q.e.d before posting preprints, and the system will track how manuscripts evolve based on its feedback. If that sticks, it could normalize visible pre-review and set clearer expectations for what "ready" looks like.

Turek hopes q.e.d will expand to fact-check claims for broader audiences. His advice is pragmatic: "Each scientist should at least try it and see whether it is [a good tool] for him or her."

Bottom Line

Peer review is essential but stretched. A structured pre-review like q.e.d can compress feedback cycles, surface missing controls, and nudge cleaner logic. It won't replace deep expertise, but it can free your best thinking to focus on what matters: proof.


