UC Davis Students Build ResearchQuest.ai to Cut Literature Reviews from Months to Minutes
UC Davis students built ResearchQuest.ai to shrink literature reviews from months to minutes. It finds papers, summarizes sections, compares methods, and cites sources.

Student-Developed AI Tool Fast-Tracks Literature Review
Two computer science researchers at UC Davis built an AI tool that compresses literature review time from months to minutes. ResearchQuest.ai helps engineers and scientists cut through the reading backlog so they can move to implementation faster.
Built by Researchers for Researchers
ResearchQuest.ai was created by Akash Bonagiri (Ph.D. student) and Gerard Anderias (undergraduate), paired through the UC Davis College of Engineering E-SEARCH program. Their starting point was a common bottleneck: curating, reading, and synthesizing hundreds of papers before a project can begin.
"It generally takes me several months to a year to curate a lot of papers that are relevant to my research and to really get an understanding of what is happening in the field," Bonagiri said. That pain point became the project.
What ResearchQuest.ai Does
- Acts as an end-to-end agent: given a research query, it manages the full process from retrieval to synthesis with minimal input.
- Finds and compiles academic papers relevant to the query.
- Identifies important sections within each paper and generates clear summaries.
- Builds a comparison table across papers for a fast, high-level view of methods, results, and how each paper aligns with your research focus.
- Enables chat-based interaction, so you can ask targeted questions (e.g., "What's the key information from the abstract?") and get sourced answers.
- Cites sources for every answer to reduce hallucinations and make verification easy.
- Includes citation generation and export features (integration in progress).
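The pipeline the list above describes — retrieve papers, summarize, build a comparison table, answer with citations — can be sketched in plain Python. Everything below (the `Paper` and `Answer` types, the keyword-matching logic) is an illustrative assumption, not ResearchQuest.ai's actual code; the real tool uses an LLM agent rather than keyword search.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str
    method: str
    source_url: str

@dataclass
class Answer:
    text: str
    citations: list  # source URLs backing the answer

def build_comparison_table(papers):
    """High-level view: one row per paper (title, method, source)."""
    return [(p.title, p.method, p.source_url) for p in papers]

def answer_with_citations(question, papers):
    """Toy keyword match; the real agent would use LLM-based retrieval,
    but the shape is the same: every answer carries its sources."""
    words = question.lower().split()
    hits = [p for p in papers if any(w in p.abstract.lower() for w in words)]
    summary = "; ".join(p.title for p in hits) or "No matching papers found."
    return Answer(text=summary, citations=[p.source_url for p in hits])
```

The key design point is that `Answer` pairs text with citations, so no response can be produced without a verifiable source trail.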
Early versions pulled papers from OpenReview. The team is expanding to Google Scholar and other academic paper repositories to widen coverage.
How It Fits Into Your Workflow
- Enter a precise research query or topic.
- Scan the curated set of papers and their summaries.
- Use the comparison table to spot the most relevant methods and findings.
- Ask follow-up questions via chat to extract details from specific sections.
- Export citations and keep a record of your session for continuity.
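The final step above, citation export, might produce standard BibTeX entries. The helper below is a hypothetical illustration of that export format, not the tool's actual exporter.

```python
def to_bibtex(key, title, authors, year, url):
    """Format one paper's metadata as a BibTeX @misc entry."""
    author_field = " and ".join(authors)
    return (
        f"@misc{{{key},\n"
        f"  title  = {{{title}}},\n"
        f"  author = {{{author_field}}},\n"
        f"  year   = {{{year}}},\n"
        f"  url    = {{{url}}}\n"
        f"}}"
    )
```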
Availability and Roadmap
The team is preparing a public web release in October 2025. The tool will be free to use, with account sign-in so you can save and revisit work. Before launch, they are deploying to a production server and implementing traffic management.
Expanding sources beyond OpenReview and building richer citation management are active efforts.
Why This Matters for IT, Dev, and Research Teams
Speeding up literature review means faster prototyping, better awareness of current approaches, and fewer duplicated efforts. You spend less time collecting and sorting, and more time building and testing.
Bonagiri already uses ResearchQuest.ai in his Ph.D. work to keep up with AI and machine learning research. For teams, this can compress knowledge ramp-up and help align on the best direction early.
Accuracy and Trust
The tool cites sources alongside each answer to make verification straightforward and reduce hallucinations. As with any LLM-assisted workflow, spot-check critical papers and validate key claims before publication or deployment.
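One lightweight way to enforce the verification habit described above is to treat any uncited answer as unverified. This guard is a minimal sketch of that policy, assuming answers arrive as text plus a citation list; it is not part of ResearchQuest.ai.

```python
def verify_answer(answer_text, citations):
    """Simple guard: reject answers with no supporting citations.
    A real review would also open each source and spot-check the claim."""
    if not citations:
        raise ValueError("Answer has no supporting citations; verify manually.")
    return {"answer": answer_text, "sources": sorted(set(citations))}
```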
From Theory to a Usable Tool
One highlight for the team has been turning research into a product that people can use and learn from. Building in public and gathering feedback gives them a tight loop for improvements that matter to working researchers.