Scientists Split Over Ethics and Limits of AI Use in Academic Research

A survey of 5,000+ scientists shows mixed views on AI in research: 90% accept AI for editing their own work, but most oppose AI writing key paper sections. Actual usage remains limited, with ethics and transparency as the central concerns.

Published on: May 21, 2025

What Scientists Really Think About Using AI in Research

A recent survey by the scientific journal Nature gathered insights from over 5,000 scientists worldwide on the ethics of using artificial intelligence in academic publishing. The results revealed clear divisions in opinion about where and how AI should be applied in research workflows.

Where Is the Line of Acceptability?

Most scientists (90%) agree that using AI for editing and translating their own work is appropriate. However, opinions diverge on transparency: 35% think it's acceptable not to disclose AI use or share prompts, while 65% believe disclosure is necessary.

When it comes to generating article content, 65% of respondents approve of AI-generated text, while about a third strongly oppose it. Acceptance is highest for AI-drafted abstracts: 23% see it as fully appropriate, and 45% accept it if AI use is disclosed.

Yet, caution prevails for other sections. A majority feel AI should not write the Methods (56%), Results (66%), or Discussion/Conclusion (61%) parts of papers. The peer review process is also sensitive; over 60% consider AI use unacceptable here, mainly due to concerns over confidentiality and accountability.

Practice Lags Behind Discussion

Despite AI tools being widely accessible, their actual use in research remains limited:

  • 28% have used AI for editing papers
  • Only 8% have used AI to write first drafts, summarize literature, translate papers, or assist in peer review
  • 65% have not used AI for any scientific tasks

Younger researchers tend to be more open to AI, and those from non-English-speaking countries use AI more frequently, mainly to overcome language barriers.

Ethics, Responsibility, and Quality

Scientists recognize both advantages and drawbacks of AI. It speeds up routine tasks and helps synthesize information efficiently. On the downside, AI often produces inaccuracies, fabricated references, or what some call "well-formulated nonsense."

Many advocate moderate use. A humanities researcher from Spain, for instance, reported using AI for translation but firmly rejected its involvement in writing or peer review.

Publisher Policies

Policies on AI use vary across publishers. Most require disclosure when AI is used for text generation but allow undisclosed use for proofreading and editing.

  • JAMA mandates specifying the exact AI tool and how it was used.
  • IOP Publishing removed mandatory disclosure but recommends transparency.
  • Most major publishers prohibit AI use in peer review.
  • Elsevier and the American Association for the Advancement of Science (AAAS) explicitly forbid reviewers from using generative AI.
  • Springer Nature and Wiley allow limited AI use with mandatory disclosure but prohibit uploading confidential material to AI platforms.

For researchers considering AI tools, understanding these policies is critical to ensure compliance and maintain research integrity.

For those interested in responsible AI use in research or seeking training on AI tools, resources like Complete AI Training’s latest AI courses offer practical guidance tailored to scientific professionals.

