AI, Integrity, and Open Science: Lessons for Institutions from KRAF 2025

AI is now baked into research, from search to drafts, raising both speed and stakes. Clear rules, human oversight, and open data can lift quality, trust, and reproducibility.

Published on: Feb 25, 2026

AI in research and publishing: What institutions need to know from KRAF 2025

AI now sits inside core research workflows: searching literature, structuring arguments, drafting text, even pre-screening submissions. That raises the ceiling on speed and support, while putting integrity, policy, and training squarely on the table. With clear rules and the right tools, institutions can turn this into an advantage for discovery and trust.

Institutional integrity: set the rules, keep humans in charge

Publishing leaders at KRAF 2025 were clear: AI is an assistant; humans make the decisions. The aim isn't to limit useful tools but to detect misuse, reinforce trust, and keep accountability with authors and reviewers. Expect more screening for fabricated content and low-quality work, and better workflows because of it.

For institutions, this is a leadership moment. Provide practical guidance on when and how AI can be used in writing, analysis, and review. Align authorship and disclosure expectations with publisher policies so researchers aren't guessing, and your editors aren't firefighting.

What editors are seeing, and what you can do

Editors report more AI-supported submissions, often built on open datasets. Quality varies, and superficial work slips in when teams treat AI output as finished research. That's preventable.

Tighten internal reviews. Train researchers to verify sources, document AI use, and test claims. Where possible, adopt screening tools (plagiarism, image forensics, data-availability checks) before submission to reduce rework later.
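As a minimal sketch of what an internal pre-submission screen could look like, the snippet below checks a manuscript for the statements an editor would expect to see (data availability, AI-use disclosure, code availability). The check names and regex patterns are illustrative assumptions, not any publisher's actual requirements; real screening tools for plagiarism or image forensics are far more involved.

```python
import re

# Hypothetical internal checks; the patterns below are illustrative
# assumptions, not actual publisher or COPE requirements.
CHECKS = {
    "data_availability": re.compile(r"data\s+availability", re.IGNORECASE),
    "ai_disclosure": re.compile(
        r"\b(AI|language model|LLM)\b.*\b(used|assisted|generated)\b",
        re.IGNORECASE,
    ),
    "code_availability": re.compile(r"code\s+availability", re.IGNORECASE),
}

def screen_manuscript(text: str) -> dict:
    """Return a pass/fail map for each internal check on manuscript text."""
    return {name: bool(pattern.search(text)) for name, pattern in CHECKS.items()}

manuscript = """
Methods: Analysis code is archived in a public repository.
Data Availability: All datasets are deposited with persistent identifiers.
AI use: A language model assisted with copy-editing; authors verified all claims.
"""

report = screen_manuscript(manuscript)
for check, passed in report.items():
    print(f"{check}: {'OK' if passed else 'MISSING'}")
```

Running this flags the missing "Code availability" statement before submission, which is the point: cheap automated checks inside the lab catch rework before an editor does.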

What researchers need right now

Surveys shared at KRAF showed that most researchers who tried AI found it genuinely helpful for search, coding, and early draft structure. Even those not yet using it expect to adopt it soon.

They're also alert to risks: bias baked into models and datasets, accuracy gaps, and environmental costs. The fix is context. Teach teams to ground AI outputs in domain knowledge, verify data and code, and document methods so others can reproduce results.

"Researchers find AI useful, but integrity and sustainability must lead." - Laura Schmid, Nature Communications

Design for context, not shortcuts

As highlighted at the forum, scientific nuance matters. Biological and environmental variability can change conclusions; AI tools should reflect that complexity instead of flattening it. Encourage researchers to use models and prompts that account for study design, population differences, and data limits, then report those choices transparently.

Open, interactive research is the direction

The future points to papers you can interrogate: filter results, explore scenarios, inspect data and code. AI can help make that feasible at scale, but integrity leads the way.

Institutions can speed this shift by supporting open data, reproducible analysis, and transparent methods in everyday workflows. That's how you increase trust while making findings more useful.

Build trust with clear, shared standards

Researchers, publishers, and institutions share the same goal: credible science. Anchor your AI plans to fairness, accountability, and transparency. For reference, see community guidance like COPE's position on AI and authorship and the FAIR data principles.

What to implement now

  • Publish clear AI policies: Cover acceptable AI use, authorship, disclosure, peer review, and data governance. Make examples concrete.
  • Run practical training: Teach prompt strategy, verification, bias checks, and transparent reporting. Focus on repeatable workflows.
  • Set ethical guardrails: Define quality standards, bias mitigation steps, and reproducibility expectations for AI-assisted work.
  • Require disclosure: Document where AI assisted writing, analysis, or decisions. Include model names, versions, and parameters where relevant.
  • Collaborate with publishers: Stay current on integrity risks, evolving policies, and editorial expectations.
  • Use screening outputs internally: Feed plagiarism, image, and data-availability reports into your lab or departmental review to catch issues early.
  • Champion context-aware tools: Prioritize AI that reflects study design, domain constraints, and real-world variability.
  • Make openness routine: Embed data, code, and methods sharing into standard project checklists and grant requirements.
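To make the disclosure item above concrete, here is one possible shape for a machine-readable AI-use record that a lab could attach to a submission package. The field names and the `ExampleLM` tool name are hypothetical, they follow no official schema; the idea is simply that model name, version, purpose, parameters, and human verification are captured in one place.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical disclosure record; field names are illustrative
# assumptions, not an official or publisher-mandated schema.
@dataclass
class AIUseDisclosure:
    tool: str                 # model or product name
    version: str              # model version or release date
    purpose: str              # what the tool was used for
    human_verified: bool      # whether authors checked the output
    parameters: dict = field(default_factory=dict)  # e.g. temperature

disclosure = AIUseDisclosure(
    tool="ExampleLM",         # placeholder name, not a real product
    version="2025-06",
    purpose="language editing of the introduction",
    human_verified=True,
    parameters={"temperature": 0.2},
)

# Serialize for inclusion in a submission package or lab records.
print(json.dumps(asdict(disclosure), indent=2))
```

A structured record like this is easy to diff across revisions and to aggregate at the department level, which is what turns disclosure from paperwork into usable governance data.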

If your teams want hands-on resources and workflows, explore AI for Science & Research for practical training aligned to academic use cases.

About the Korean Research Advisory Forum (KRAF)

KRAF was launched to bring institutional leaders and researchers together to share priorities and shape next steps for research. The members are listed below (alphabetical):

  • Changmo Sung, Director, Mission PM Center, Korea ARPA-H
  • Chulhong Kim, Professor, Pohang University of Science and Technology
  • Heisook Lee, President, GISTeR
  • Je Kyung Seong, Professor, Seoul National University
  • Jooyoung Park, Associate Professor, Seoul National University
  • Mijin Yun, Professor, Yonsei University College of Medicine
  • Sang Yup Lee, Professor, Korea Advanced Institute of Science and Technology
  • Sun Huh, Professor, Hallym University
  • William Jo, Professor, Ewha Womans University
  • Woojung Jang, CEO, AI Star
  • Wooyoung Shim, Professor, Yonsei University

The signal is clear: AI can help research move faster and with more clarity, provided institutions lead with standards, training, and transparency. Set the guardrails, upgrade workflows, and your researchers will do the rest.

