AI-hallucinated citations get student science fair projects disqualified

AI chatbots are inventing fake research citations, getting student projects disqualified from ISEF and slipping fabricated references into published journals. Always verify a source by checking its DOI before citing it.

Categorized in: AI News, Science and Research
Published on: May 05, 2026

Science Fair Judges Warn: Don't Trust AI for Citations

Chatbots are generating fake research papers and inventing sources that don't exist, getting student projects disqualified at elite science competitions and infiltrating published research journals. Jenna DeLuca, scientific integrity officer at the Society for Science, disqualified multiple projects at the 2025 and 2026 Regeneron International Science and Engineering Fairs (ISEF) because students had cited papers with wrong authors, nonexistent journals, or entirely fabricated titles.

ISEF is no casual regional competition. It is selective enough to be called the Olympics of science fairs: only about 1,800 young researchers from around the globe earn a spot each year after winning regional competitions.

"There were a decent amount," DeLuca said of the disqualifications. Students likely asked chatbots like ChatGPT or Gemini for references without realizing the systems would invent citations wholesale. This problem extends far beyond high school science fairs.

Fake citations are spreading through published research

Professional scientists are publishing papers with hallucinated citations at increasing rates. Two recent studies document the scope of the problem.

Researchers analyzed nearly 18,000 papers presented at six computer-science conferences in 2024 and 2025. At least 300 contained one or more hallucinated citations. A separate analysis of 4,000 randomly selected papers, book chapters, and conference proceedings from five leading publishers found at least 65 with fake citations. Based on this rate, tens of thousands of scientific works published in 2025 may contain hallucinated references.

Ben Williamson, a digital-education expert at the University of Edinburgh, calls these "ghost references": pieces of work that don't exist but continue circulating through the literature. He caught a particularly striking example: a fake paper listing him as author that he had never written. Multiple versions existed with different co-authors, dates, and journals. One version had been cited in at least 77 other papers.

Even fact-checking can fail

DeLuca tested whether students could catch AI hallucinations by fact-checking. She created a fake citation by merging information from two real papers, then searched for it online.

Google's AI overview appeared at the top of results and described the nonexistent paper as real and correct. A student following that summary would have found no red flags. They would have had to scroll past the AI summary to discover the mistake.

DeLuca also tested what happens when a chatbot formats real citations. Even with legitimate sources, the system introduced bogus details into what had been accurate references.

How to spot a fake citation

Each journal has its own citation format. The most important element is usually the DOI (digital object identifier), a unique string that works like a fingerprint for a paper.

To verify a citation, enter the DOI into a web browser (prefixing it with https://doi.org/ resolves it to the publisher's page for the paper). Check that every detail matches: title, author names, journal name, volume, and page numbers. If anything is off, the citation may have been fabricated or mangled by AI.
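For readers who check citations in bulk, the same lookup can be scripted. The sketch below is a minimal, hypothetical example (not a tool mentioned in the article): it queries the public Crossref REST API, which returns registered metadata for a DOI and a 404 for DOIs that were never registered, and compares the claimed title against the registered one. Function names and the loose title comparison are this example's own assumptions.

```python
import json
import urllib.error
import urllib.parse
import urllib.request


def titles_match(claimed: str, registered: str) -> bool:
    """Loose comparison: ignore case and surrounding whitespace."""
    return claimed.strip().lower() == registered.strip().lower()


def verify_doi(doi: str, claimed_title: str) -> bool:
    """Check a citation's claimed title against the DOI's registered metadata.

    Uses the public Crossref REST API (https://api.crossref.org/works/{doi}).
    A 404 means Crossref has no record of the DOI at all, which is a strong
    hint the citation is fabricated.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)["message"]
    except urllib.error.HTTPError:
        return False  # DOI not registered: likely a hallucinated citation
    registered = (meta.get("title") or [""])[0]
    return titles_match(claimed_title, registered)
```

Even with a script like this, a matching title is not the whole check: authors, journal, volume, and pages should still be compared by eye, since hallucinated citations often splice a real title onto the wrong authors or venue.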

The legal stakes for researchers

Citation errors may violate federal law. David B. Resnik at the National Institute of Environmental Health Sciences and Mohammad Hosseini at Northwestern University's Feinberg School of Medicine argue that hallucinated citations could qualify as research misconduct and fraud - at least for scientists receiving federal funding.

"This is a significant problem," the two concluded in their analysis published in Accountability in Research, "and one that needs more attention."

What researchers should do instead

Some researchers avoid AI entirely. Jennifer Borgioli Binis, an education historian based in Buffalo, doesn't use any generative AI due to concerns about environmental and social costs. She advocates for curiosity over quick answers when finding sources.

Others take a middle path. Mythreya Dharani, a high school senior who competed at ISEF in 2025, uses AI to suggest research directions but then follows the links to actually read the papers cited. "A chatbot is just giving you a direction," he said. "But you still have to do the real work."

Dharani argues against avoiding chatbots entirely. Instead, researchers should ask: "What are the things that we can trust them with and what are the things that we should be more cautious of?" Citations fall squarely in the second category.

DeLuca is spreading awareness through presentations and updates to ISEF guidelines on appropriate AI use. "We're trying to educate as many people as we can," she said.

Whether you're a high school student or a published researcher, you remain responsible for the accuracy of anything you produce. Judges, universities, and employers will hold you accountable - even if the AI got it wrong.

For more on using AI responsibly in research, see AI for Science & Research.

