Most Researchers Using AI in Papers Don't Disclose It, Study Finds
Researchers at Peking University analyzed over one million published papers and found that most articles with suspected AI use did not disclose it, even in journals with explicit policies requiring disclosure. The study, published in the Proceedings of the National Academy of Sciences, reveals a gap between journal guidelines and researcher behavior.
The researchers categorized more than 5,000 journals by their AI policies. Most permitted AI for writing and editing support, with over 60 percent allowing its use for language and grammar assistance. Despite these permissive policies, disclosure remained rare.
The Disclosure Problem
Among papers with suspected AI content, the majority did not disclose the use. This held true whether or not the journal had an AI policy in place.
The disclosure rate did rise slightly, from approximately 0.1 percent in 2023 to 0.43 percent in early 2025. The researchers attributed this increase to policies helping foster transparency, though they acknowledged that researchers may hesitate to disclose AI use out of concern for how it might affect perceptions of their work.
AI Use Correlates With Language Background
Papers by authors from non-English-speaking countries showed higher rates of probable AI content than those by authors from English-speaking countries. This pattern suggests researchers may be using AI to overcome language barriers, though the detection method cannot definitively distinguish language polishing from outright text generation.
What Experts Say
A data scientist at the University of Louisville praised the researchers' multipronged verification approach but questioned whether the method of categorizing different AI uses was sound, given the potential overlap between categories.
He also disagreed with the researchers' characterization of nondisclosure as a "cautious approach," comparing it instead to failing to disclose conflicts of interest. "I would describe it as a lie," he said.
The expert called for clearer journal guidelines that eliminate ambiguity, such as defining what counts as "light editing," and for faster retraction of papers with improper AI use.
Next Steps
The researchers said publishers should shift toward promoting responsible AI integration, supported by better detection infrastructure. They are developing improved models to detect and classify AI use in research and plan to complement their quantitative findings with surveys of how scientists actually use these tools.
For professionals working in research, understanding your journal's AI policy and disclosure requirements is now essential.