Caught Between AI and Authenticity: How Detection Tools Are Changing Writing for Students and Journalists

Students and journalists face challenges as AI use in writing grows, with detection tools often flawed and lecturers skeptical of perfect work. Experts urge human review and fair AI policies.

Published on: Aug 25, 2025

The Impact of AI on Writing and Detection

Dennis Anthony, a 400-level Mass Communication student at Kaduna State University, faced punishment after his lecturer discovered he had used artificial intelligence for an assignment. Dennis admitted to relying on AI to meet his deadline, but his lecturer said he could tell the difference easily. According to the lecturer, “AI speaks a different English” compared to typical student writing.

Dennis shared a common concern among students: “Our lecturers just assume no one can write a perfect piece without using AI. They use AI themselves, and that’s a big threat to our writing skills.”

Similarly, Alkasim Isa, a journalist in Kano State, ran into trouble when his editor rejected his article, suspecting it was AI-generated because of its overly polished, uniform language. As AI-generated content becomes more widespread, from essays to news stories, editors and lecturers increasingly turn to detection tools such as GPTZero, Turnitin, and Copyleaks to distinguish human writing from machine output. Ironically, these detection tools are themselves powered by AI.

Writers report avoiding vivid or structured phrases that once enriched their writing because such patterns increasingly trigger AI detectors. Editors rely on these tools to verify article originality, but transparency is low and accuracy varies depending on the language style and length of the text.

Popular AI detection tools include Copyleaks, Originality.AI, GPTZero, Turnitin, Winston AI, Sapling, and AI Detector Pro. For example, Copyleaks claims over 99% precision with a low false positive rate, detecting content from major AI models like ChatGPT, Gemini, and Claude. GPTZero provides probability scores, flagging sections as AI, human, or mixed.

Yet, Jibril Haruna, Lead of AI Engineering at Seismic Consulting Group, warns these tools are opaque, biased, and flawed. They function as classifiers trained on datasets of human and AI texts to spot patterns in word choice and linguistic variation. Haruna criticizes their lack of transparency, as they often produce percentage scores without revealing methodology or accuracy rates. This especially affects non-native English speakers whose writing may differ from the training data, punishing vulnerable writers and students.

Moreover, these detectors cannot reliably distinguish between fully AI-generated essays and AI-assisted work like grammar checks or brainstorming.
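To make Haruna's point concrete, the sketch below shows, in broad strokes, how a detector of this kind can be built: a binary classifier trained on labelled human and AI passages that outputs a probability score. It is a hypothetical illustration in Python using scikit-learn, not the actual method of GPTZero, Copyleaks, or any other product, and the tiny example corpus is invented.

```python
# A toy, hypothetical detector (not any vendor's actual method):
# a binary classifier trained on labelled human and AI passages
# that reports a probability score for new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus: label 1 = AI-generated, 0 = human-written.
# A real detector would train on many thousands of labelled examples.
texts = [
    "The rapid advancement of technology has fundamentally transformed modern communication.",
    "Honestly, I scribbled this draft on the bus and it still needs work.",
    "It is imperative to note that numerous factors contribute to this outcome.",
    "My editor hates long intros, so I'll keep this one short.",
]
labels = [1, 0, 1, 0]

# Word n-gram frequencies stand in for the "patterns in word choice and
# linguistic variation" the article describes.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

sample = "In conclusion, it is imperative to recognise the transformative potential of AI."
ai_probability = detector.predict_proba([sample])[0][1]
print(f"Estimated probability of AI authorship: {ai_probability:.0%}")
```

Even this toy version exposes the weakness Haruna describes: the score depends entirely on how well the training examples represent the writer being judged, so prose that merely resembles the AI-labelled samples, such as formal academic English from a non-native speaker, can be scored as machine-written.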

Journalists Struggle to Compete with AI Writers

Alkasim Isa decided to be transparent with his editor, admitting he had used AI to structure his article. The honesty backfired: his editor began suspecting all his subsequent submissions of being AI-generated, regardless of how they were written.

Freelance journalist Sani Modibbo uses AI for headlines but writes article bodies himself. Despite this, an editor once suspected AI involvement purely based on writing style. “This has made me wary of using such tools,” Modibbo said.

Sunday Michael Ugwu, Editor of Pinnacle Daily, explains editors detect AI content by spotting sudden shifts in writing style and quality. He cautions that publishing AI-fabricated stories can damage a reporter’s career. “Editors rely on experience, look for overly mechanical writing, and independently verify facts,” he said.

Ugwu stresses AI isn’t inherently bad but must never replace creativity or storytelling. He also notes some AI detection tools are already being outsmarted by AI products that humanize their content.

Lecturers Are Using AI Too

A student at Northeastern University in the U.S. demanded her tuition back after discovering a professor used ChatGPT to prepare course materials—while forbidding its use by students. This contradiction is echoed in Nigeria, where lecturers often accuse students of AI use when their work is simply well-written.

Bashira Shu’aibu, a final-year Mass Communication student, says some lecturers dismiss her hard work as “too perfect.” “But it is what they taught us,” she insists.

Another 300-level student admitted to mixing AI with their own creativity but was still caught and penalized. Farida Ahmed Bala, another student, warned against relying on AI for final projects, citing plagiarism risks. “If AI does everything for us, why are we in school?” she asked.

Conversely, some students learn to use AI without detection by combining AI checks with manual editing and proper citation.

Dr. Ismail Muhammad Anchau, Chief Lecturer at Kaduna Polytechnic, acknowledges rising AI use among students, calling it both a development and a threat. He believes skilled lecturers can detect AI use through close reading and tests rather than relying on detection software.

Dr. Babayo Sule from the National University of Lesotho worries AI erodes originality and talent. His university uses detection tools, dismissing work with high AI percentages. “When you see mistakes, you know the work is original. Too clean? That’s AI,” he explains.

Does Humanizing AI-Generated Content Solve the Issue?

Writers often use AI humanizers to make machine texts look natural, believing this will bypass detection. However, tests show these tools are not foolproof.

For example, a journalist’s career summary generated by ChatGPT was flagged by GPTZero as 92.25% AI-generated. Even after humanizing with tools like Undetectable AI and Humanize.AI, the text still triggered AI detection or produced mixed results.

Ibrahim Zubairu, technical product manager and founder of Malamiromba, explains detection tools fail because they rely on patterns from biased training data. “They assume fixed ideas of human writing, but writing changes,” he says. His own tests confirmed that heavily edited AI content can fool leading detectors like GPTZero and Copyleaks.

Scholars Can Spot AI Text

AI educator Grema Alhaji Yahaya argues that scholars often don’t need AI detection software. Linguistic and stylistic clues give away machine writing:

  • Perfect punctuation combined with unusual overuse of em dashes or semicolons
  • Repetitive, overly formal language that feels like an academic thesaurus
  • Absence of natural quirks found in human writing

“If you pay attention, the patterns speak for themselves,” Yahaya says.

Going Forward

AI researcher Dr. Najeeb G. Abdulhamid warns that AI detection tools are far from reliable. OpenAI shut down its own text classifier over low accuracy, and Turnitin itself cautions against fully trusting low-percentage AI scores. “Detector outputs are weak signals, not proof,” he says.

He stresses the need for human review, corroboration, and fair policies that prohibit relying on AI detection alone. Clear appeals processes and transparency are essential to avoid false positives that unfairly penalize students and writers.

Abdulhamid recommends policies aligned with UNESCO standards on AI ethics, including impact assessments, explainability, audits, and governance boards with student representation.


