Israeli AI Tool Takes On Peer Review's Bottleneck
Peer review, the process where experts evaluate manuscripts before publication, remains essential to scientific credibility. It also remains broken. Researchers routinely wait months for feedback. The process favors established researchers over newcomers. And it absorbs enormous amounts of expert time across the scientific community.
Now an Israeli startup is offering a practical response: an AI tool that gives authors critical feedback before formal peer review begins.
The Problem With Peer Review
Without peer review, published research would lack credibility. Journals would publish unchecked findings alongside unreliable claims, making it impossible for readers to distinguish genuine discoveries from speculation. The process replaced earlier systems where editors alone decided what to publish, a model prone to bias and conflicts of interest.
But peer review's flaws are real. Reviewers are unpaid volunteers. Decisions depend partly on which experts a paper happens to reach. The timeline stretches across seasons. Many researchers have waited a year or longer for final acceptance.
Preprint platforms like arXiv offered a partial solution, letting researchers share work before formal review. That model has limits. arXiv recently tightened rules for computer science papers after an influx of low-quality submissions, many generated by large language models.
Meet q.e.d Science
Oded Rechavi, a neuroscientist at Tel Aviv University, founded q.e.d Science with colleagues after spending 18 months developing the platform with programmers, scientists, and AI experts. Rechavi is known for critiquing academic culture on social media (he has over 145,000 followers on X) and for research that crosses traditional disciplinary boundaries.
The platform uses a large language model trained on peer reviews and published papers, along with expert annotations marking gaps in research. Researchers submit a draft manuscript and receive structured feedback within minutes. The system identifies main claims, points out logical gaps, highlights strengths and weaknesses, and suggests revisions.
The goal is not to replace peer review. Instead, the platform lets authors strengthen their work before submission, potentially reducing reviewer criticism and speeding publication.
AI as a Tool, Not a Judge
The distinction matters. An author using AI to self-check their own work is fundamentally different from a reviewer uploading a confidential manuscript to a generic AI tool. Major publishers including Nature Portfolio, Elsevier, and Springer Nature now warn reviewers against uploading unpublished work to standard AI systems.
Rechavi said the q.e.d team includes scientists working to build "a focused, reasoned force multiplier for human critical thinking." He added: "We neither want nor aim to replace" human judgment.
Real risks remain. AI systems can struggle to recognize genuinely novel research. They inherit biases from training data. They may generate plausible-sounding but incorrect feedback. Over-reliance on AI tools could weaken human critical judgment, a concern that extends far beyond academia.
Rechavi said he has "no intention of taking humans out of the loop. The final decision should remain in the hands of human beings, for human beings."
Early Adoption and Unknowns
The platform launched after a limited pilot with Rechavi's colleagues. Since then, bioRxiv and openRxiv announced a pilot allowing authors to submit preprints to q.e.d for AI-generated assessment. The tool grew from life sciences research, and most early testing comes from that field.
Rechavi hopes to expand it across the natural sciences and engineering, and eventually to those areas of the social sciences and humanities where scientific validity can be tested.
The real impact remains unknown. Researchers will need to evaluate performance across disciplines and manuscript types. They will need to ask whether the feedback is useful, fair, and secure, and whether it improves work without weakening human judgment.
In an era when AI can generate scientific-sounding text with ease, tools that claim to improve review will themselves need careful scrutiny.
What Comes Next
Rechavi said his goals are straightforward: "Identify and publish, as quickly as possible, the best, most original, and most valid science above the endless noise. And for science to be an enjoyable profession, and for talented people to choose it as a way of life."
q.e.d and similar tools may not solve peer review's fundamental problems. They should not replace human reviewers and editors. But if they help researchers identify weaknesses earlier and reduce friction in publishing, they could help important findings reach readers faster.
For professionals working in research and science, understanding how AI fits into these processes matters. Consider exploring Generative AI and LLM Courses to understand how large language models work in specialized applications, or AI Research Courses for deeper context on AI's role in scientific work.