Wrongly accused by AI, students face interrogations and zeros as officials stall

AI detectors are tripping up NSW students: false flags, zeros, and weeks of stress. Schools need clear rules, human review, and fairer tasks that actually show learning.

Published on: Dec 22, 2025

AI Detectors Are Failing Students: What NSW Schools Can Do Now

False positives from AI detectors are stripping students of marks, pulling them out of class, and forcing them to "prove innocence" after the fact. Two NSW cases show how policy gaps and process confusion turn a tech signal into a weeks-long ordeal. If you work in education, you need a cleaner, safer playbook now.

What happened

At Davidson High School, a Year 12 student, Gabe Jones, was twice pulled from class and kept after school to justify a 3,000-word assessment. A detector flagged his work as AI-written. Staff made him break down his paper sentence by sentence; his parents were not informed. "We know you didn't do it, but you need to prove it," he was told.

His father, Trevor Jones, escalated to the NSW Department of Education and then the NSW Ombudsman. The school took three weeks to clear Gabe. The department took five-and-a-half months to say the school did nothing wrong. The ombudsman took four months to say it could not act until the department finished its review.

In regional NSW, Armidale Secondary College gave a Year 11 student, Sophie, two automatic zeros after an AI detector said her work was "100 per cent AI." She was allowed to redo one task, but only as a supervised, pen-and-paper hour in the library. She described it as embarrassing and punitive. Her mother said parents were kept in the dark and told there was "nothing you can do."

Policy confusion

Many public schools are using AI detectors. The tools aren't recommended, but they aren't prohibited either. An earlier version of a NSW government page stated that no tool can reliably detect AI-generated text and that detector outputs should not be used as evidence. Yet schools report no formal directive to stop using detectors, and responsibility keeps bouncing between school leaders and the department.

Why detectors misfire

AI-text detection remains unreliable, with high false-positive rates and documented bias, particularly against non-native English speakers and certain writing styles. Even OpenAI discontinued its own classifier, citing low accuracy. See: OpenAI: AI Text Classifier (discontinued).
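
To see why a single score can't be trusted, run the base rates. The figures below are purely illustrative assumptions, not measured rates for any real detector: a minimal sketch assuming a 5 per cent false-positive rate, a 90 per cent detection rate, and that one in ten submissions is actually AI-written.

    # Illustrative base-rate sketch -- every rate below is an assumption,
    # not a measured figure for any real detector.
    false_positive_rate = 0.05  # P(flagged | honest work)        -- assumed
    true_positive_rate = 0.90   # P(flagged | AI-written work)    -- assumed
    prevalence = 0.10           # share of submissions AI-written -- assumed

    # Probability that a randomly chosen submission gets flagged at all
    p_flagged = (true_positive_rate * prevalence
                 + false_positive_rate * (1 - prevalence))

    # Of all flagged submissions, the share that is honest work
    false_accusation_share = (false_positive_rate * (1 - prevalence)) / p_flagged
    print(f"{false_accusation_share:.0%} of flags land on honest work")  # ~33%

Under these assumed numbers, roughly one flag in three points at honest work, before any human ever looks at the essay. That is why a score can only ever be a prompt for review, never evidence on its own.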

Global guidance stresses that detector scores should not be used as sole evidence. UNESCO's recommendations point to human-led judgement, transparent processes, and assessment redesign. See: UNESCO: Guidance for generative AI in education and research.

The human cost

Students report being named in front of peers, earning instant zeros, and being discouraged from appealing. Rumours spread fast. Anxiety builds. Trust drops. Parents say they're brought into the process late, if at all.

What schools can do now

  • Stop using detector scores as evidence. No automatic zeros. Treat detections as a prompt for human review, not a verdict.
  • Ask for evidence of learning, not confessions. Draft history, planning notes, citations, quick oral checks, and short in-class writing samples.
  • Use a clear protocol. Notify the student and parent/carer within 24 hours, assign a case manager, outline rights, and set timelines for resolution.
  • Redesign high-stakes take-home tasks. Add in-class checkpoints, staged drafts, and mixed-method submissions to create authentic proof of work.
  • Keep conversations private. Protect student dignity. Avoid public callouts and whole-class caveats about "AI flags."
  • Document decisions. Store notes from meetings and checks. Keep raw detector outputs out of permanent records unless validated by human review.
  • Run equity checks. Monitor outcomes for EAL/D learners and students with disability. Calibrate expectations across faculties.
  • Invest in staff capability. Provide PD on AI-aware assessment design and academic integrity.

Suggested policy guardrails (system level)

  • Ban the use of detector scores as the sole basis for academic penalties.
  • Require transparent tool documentation, known error rates, and clear data handling rules.
  • Default to formative verification steps before any penalty.
  • Set response timelines and a standard appeals pathway that involves parents.
  • Limit storage and sharing of detector outputs; prioritise privacy.
  • Publish annual integrity reports with de-identified data on allegations, methods, and outcomes.
  • Fund statewide PD on AI-aware assessment and integrity practice.

Expert view

Teaching and technology experts argue the current trial-and-error approach is hurting students. The shift is simple in principle: spend less time chasing cheats and more time collecting evidence of learning. That requires coordinated national action, consistent policy, and investment.

Bottom line

A detection arms race with AI is a dead end. Build assessments that assume AI exists, make integrity checks humane and timely, and keep families in the loop. Clear policy and better design will do more for learning, and for fairness, than any detector score ever will.

