Letting AI code your data: where productivity ends and integrity begins

Where does AI assistance end and authorship begin?

Categorized in: AI News, Science and Research
Published on: Nov 12, 2025

Ethical AI in Qualitative Research: Dr. Michael Gizzi leads an evidence-first study

Dr. Michael Gizzi, professor of criminal justice sciences and 2025-26 CAST Research Fellow, is building a practical ethics playbook for AI-assisted qualitative research. His project, "The Ethics of Artificial Intelligence and Qualitative Research," examines where machine help ends and scholarly authorship begins, and how to keep research integrity intact while teams adopt new tools.

After two years of testing systems like ChatGPT, Copilot, NVivo, and MAXQDA's AI-Assist, Gizzi brings both practitioner and methodological expertise to the table. As a certified MAXQDA trainer, he has seen how quickly these tools can auto-code, generate subcodes, and summarize complex datasets, benefits that also raise hard questions about accountability and authenticity.

Why this study matters

The core issue is ownership and attribution. If an AI system produces your themes, literature outlines, or memos, where is the researcher's judgment-and what's left of the scholarly contribution?

Gizzi puts it simply: at what point does an AI-generated summary stop being your research? For faculty and grad students, the answer affects peer review, mentorship, and how we teach critical thinking.

What the team is testing

Gizzi is collaborating with Dr. Stefan Rädiker, a qualitative methodologist and lead developer of MAXQDA, based in Germany. They compare human-coded datasets with AI-assisted analysis across clear, testable criteria.

  • Precision and dependability of codes and themes
  • Bias patterns introduced or amplified by AI
  • Consistency across repeated runs and model updates
  • Data privacy risks in tool workflows and logs
  • "Data hallucinations": confident but inaccurate insights
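The first criterion, precision and dependability of codes, is typically quantified with a standard inter-coder agreement statistic. As a minimal sketch of what such an audit might look like (the code labels and segments below are hypothetical, not data from the study), Cohen's kappa between a human coder and AI-assigned codes on the same segments can be computed like this:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders who labeled the same segments."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of segments where both coders chose the same code
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each coder's code frequencies
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to the same ten interview segments
human = ["trust", "fear", "trust", "policy", "fear",
         "trust", "policy", "fear", "trust", "policy"]
ai    = ["trust", "fear", "fear", "policy", "fear",
         "trust", "trust", "fear", "trust", "policy"]

print(round(cohens_kappa(human, ai), 3))  # → 0.697
```

Values near 1 indicate the AI's coding closely tracks the human's; values near 0 mean agreement is no better than chance, which is the kind of evidence the "consistency across repeated runs and model updates" criterion would also build on.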

The goal isn't to ban the tools; it's to define ethical boundaries that protect scholarly rigor while using AI where it actually helps.

Tools under review

ChatGPT, Copilot, NVivo, and MAXQDA's AI-Assist are being evaluated for coding support, summarization quality, and how they influence a researcher's interpretive decisions. Speed is easy to measure; the hard part is tracing which insights come from the researcher and which come from the model.

Phase two: a cross-disciplinary ethics framework

The second phase expands beyond tool tests. Through a qualitative content analysis of recent work in the social sciences and STEM, the project distills the most pressing ethical issues and proposes best practices for responsible AI use in academia.

Instead of a quantitative meta-analysis, the team applies thematic synthesis to surface methodological, ethical, and practical guidance that research groups can actually use.

Practical questions every research team should answer

  • Disclosure: Where and how will you state AI assistance in methods and authorship notes?
  • Attribution: Which outputs (codes, themes, memos, lit summaries) must remain human-generated?
  • Validation: How will you audit AI-assisted codes against human coders?
  • Reproducibility: Can another team reproduce your AI steps given model/version changes?
  • Privacy: What data leaves your environment, and under what agreements?
  • Training: How will students learn critical analysis without over-relying on AI shortcuts?
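One practical way to answer the disclosure and reproducibility questions above is to keep an audit log of every AI-assisted step alongside the analysis, recording the exact tool and model version since model updates can change outputs. A minimal sketch, with illustrative field names and values that are not a published standard:

```python
import json
from datetime import datetime, timezone

def log_ai_step(log_path, tool, model_version, prompt, output_summary):
    """Append a record of one AI-assisted analysis step to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # which assistant was used
        "model_version": model_version,    # exact version, for reproducibility
        "prompt": prompt,                  # what the researcher asked
        "output_summary": output_summary,  # short description of what the tool produced
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Illustrative usage with made-up values
rec = log_ai_step("ai_audit.jsonl", "MAXQDA AI-Assist", "2025.1",
                  "Suggest subcodes for the 'trust' code",
                  "12 candidate subcodes, 5 adopted after review")
print(rec["tool"])
```

A log like this gives a methods section something concrete to cite for disclosure, and gives another team the information they would need to attempt reproduction under the same model version.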

Institutional impact

This work supports Illinois State University's focus on scholarly excellence and interdisciplinary research within the College of Applied Science and Technology. It also strengthens the Department of Criminal Justice Sciences' commitment to methodological diversity, academic rigor, and policy relevance.

Expected outputs include journal articles and a book collaboration to guide scholars on responsible AI use in qualitative research.

Bottom line

AI isn't going away. Gizzi's aim is to protect core principles (intellectual honesty, accountability, and research integrity) while making smart use of new tools. Productivity matters, but not at the expense of authorship and truthfulness.


