Business students see ChatGPT as useful but worry about fairness and academic integrity, study finds

UK business students use ChatGPT routinely for drafts, summaries, and feedback, but unclear rules leave many anxious about crossing lines they can't see. Researchers say universities need specific guidance, not broad warnings.

Published on: Mar 18, 2026

Business Schools Grapple With ChatGPT: Students Weigh Speed, Fairness, and Academic Honesty

A qualitative study of recent management graduates in the United Kingdom reveals that students use generative AI tools like ChatGPT routinely in their coursework, and that their attitudes are far more nuanced than either simple enthusiasm or outright cheating.

Researchers interviewed 15 final-year business students about their actual use of ChatGPT and found three overlapping tensions shaping how students approach these tools: immediacy, equity, and integrity.

Speed and Reassurance Drive Adoption

Students described ChatGPT as a permanent fixture in their study toolkit, alongside search engines and lecture recordings. They used it to summarize articles, generate examples, explain complex theories, and plan assignments.

The appeal wasn't purely about usefulness. Students valued the instant availability and lack of judgment: unlike a professor's office hours or email, ChatGPT responds immediately and without evaluation. Some used it to check their understanding before writing in their own words, or to get unstuck when staring at a blank page.

One student described it as having "a private tutor who never sleeps." But that convenience raised a harder question: if AI can always rescue you at the last minute, are you really learning?

Access Creates New Inequality

Students who paid for premium versions felt they received more accurate, detailed support than peers using free tools. Some saw this as another form of educational inequality.

Others worried that assessment success might increasingly depend on whether students could afford better algorithms, and on whether they had the skills to prompt the system effectively. Being young doesn't automatically make someone digitally fluent.

At the same time, some students viewed AI as a leveler. Those with dyslexia, ADHD, or other conditions said ChatGPT helped with planning and time management. International students said it improved their academic English. For them, AI felt less like cheating and more like a reasonable adjustment.

Rules Are Vague, Creating Anxiety

All students knew that copying ChatGPT output directly into assignments would be plagiarism. But beyond that, rules felt murky.

Was it acceptable to ask ChatGPT for feedback on a draft? To suggest headings? To generate argument lists that students then verified using original sources? Different courses and lecturers gave different answers, leaving students uncertain about what counts as legitimate help versus misconduct.

This ambiguity made some anxious about being accused of cheating even when acting honestly. Group work added risk: one teammate's heavy reliance on AI could trigger plagiarism detection software, potentially implicating the entire group.

Employers May Discount AI-Era Graduates

Students worried that future employers would dismiss their work as "AI-generated," devaluing years of effort. Even those who used ChatGPT sparingly feared their entire cohort might be seen as "AI-made."

Current evidence offers mixed signals. Hiring managers are increasingly skeptical of graduates' written applications but simultaneously seek candidates with AI skills. Employers have already begun prioritizing skills verification over credentials alone.

What Universities Need to Do

The research suggests institutions should move beyond simple bans or endorsements. Students are already integrating these tools into their work. The question is whether universities will help them do so transparently, fairly, and with integrity intact.

Clarify rules with concrete examples. Rather than broad warnings about ChatGPT misuse, students need discipline-specific guidance on what's allowed and why. This includes acknowledging legitimate uses like accessibility support or language assistance.

Assess process alongside product. Students could explain how they used AI, reflect on its limitations, and show verification steps. This makes AI use visible and accountable, similar to citing sources in a footnote, rather than something to hide.

Address equity directly. If some students can access far more powerful tools, that affects fairness. Universities could provide standardized AI tools to all students, teach critical use, or redesign assessments so success doesn't depend on premium access.

The OECD has called for education stakeholders to encourage "inclusive, trustworthy and meaningful uses of GenAI in education" aligned with educational goals.

Students Are Thinking Carefully

The students in this study were not reckless rule-breakers or naive digital natives. They thought critically about AI's benefits and risks, and wanted to protect the value of their degrees.

If universities ignore this perspective, they risk signaling that integrity is only about catching cheats. If they instead engage with students' real experiences of speed, fairness, and honesty, AI could become an opportunity to rethink what meaningful learning and fair assessment look like, rather than a threat that quietly undermines them.

For educators, this means treating ChatGPT and similar tools as part of the curriculum itself and developing policies that reflect how students actually work.

