Adelphi student sues after AI-plagiarism claim: What educators need to know
A first-year student at Adelphi University is suing after being found responsible for using AI to write a history essay deemed "too advanced" for his level. He says the claim is false and that his writing reflects years of tutoring along with documented learning and neurological disabilities. The university cited an AI detection signal from Turnitin, among other factors, then assigned a plagiarism workshop and denied an appeal. A court hearing is scheduled for November after Adelphi moved to dismiss the case.
The case in brief
Professor Micah Oelze reported the essay after Turnitin flagged it as "100% AI" and the writing included terms he felt did not match a typical first-year "voice." The student, Orion Newby, countered that he did not use AI and submitted the paper to two other detectors that did not flag it. Adelphi's academic integrity officer upheld the violation with a first-offense educational sanction, while noting future violations could lead to suspension or expulsion. Newby seeks to overturn the decision and recover tuition and fees.
Why this matters for your classroom
Student AI use is now mainstream. Survey data cited in the case shows roughly nine in ten students report using AI in their studies, and most say usage increased over the past year. Some faculty are tightening assessments with in-class writing or presentations. Others are wrestling with how to investigate suspected misuse without over-relying on automation.
What AI detectors can and can't do
Experts caution that AI detection tools produce false positives and should not be used as the sole basis for discipline. Even Turnitin's own guidance advises educators to corroborate signals with additional evidence. Because generative models produce original text, there's rarely a definitive "smoking gun." Some research and campus guidance also warn these tools may disproportionately flag non-native speakers or writers with atypical patterns.
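The base-rate problem behind these warnings can be made concrete with Bayes' rule. The sketch below is purely illustrative: every number in it (prevalence of AI use, detector sensitivity, false-positive rate) is a hypothetical assumption, not a published statistic for Turnitin or any other product.

```python
# Illustrative only: what a "flagged as AI" result actually implies depends on
# base rates. All numbers here are hypothetical assumptions, not measured
# statistics for any real detector.

def flag_ppv(prevalence, sensitivity, false_positive_rate):
    """Probability a flagged essay was actually AI-written (Bayes' rule)."""
    true_flags = prevalence * sensitivity              # AI essays correctly flagged
    false_flags = (1 - prevalence) * false_positive_rate  # human essays wrongly flagged
    return true_flags / (true_flags + false_flags)

# Suppose 10% of submissions are AI-written, the detector catches 98% of them,
# and it wrongly flags 2% of human-written work.
ppv = flag_ppv(prevalence=0.10, sensitivity=0.98, false_positive_rate=0.02)
print(f"P(actually AI | flagged) = {ppv:.2f}")  # about 0.84
```

Even with these generous assumed accuracy figures, roughly one in six flags would land on a human-written essay, which is why corroborating evidence, not the flag alone, should drive any sanction.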
Disability, "voice," and due process
Newby's family says his writing reflects language and auditory processing disorders and ADHD, supported by years of tutoring and support services. He chose Adelphi, in part, for its Bridges program for students with autism spectrum or related conditions. Cases like this raise tension between subjective judgments of "voice" and the need to account for disability-related differences. Institutions also have obligations under disability and civil rights law to ensure fair procedures and appropriate accommodations; see ADA.gov for baseline guidance.
Action framework for educators and administrators
- Set clear, specific policies: Define allowed vs. banned tools (tutors, grammar support, paraphrasers, and LLMs). Require disclosure of any assistance used.
- Design for process, not just product: Use staged drafts, revision notes, source logs, in-class writing samples, and brief oral check-ins to establish authorship.
- Use detectors as a signal, not a verdict: Triangulate with draft history, timestamped notes, citation checks, and a short viva-style conversation focused on the submitted work.
- Calibrate penalties to evidence and impact: Favor supervised rewrites or alternate assessments on first offenses when intent is unclear. Reserve severe sanctions for clear, corroborated cases.
- Account for disability: Coordinate with disability services before judging "voice" or vocabulary. Ensure processes align with ADA/Section 504 and syllabus policies.
- Be transparent about procedures: Publish investigation steps, evidence standards, and appeal rights. Document everything.
- Educate for AI literacy: Teach what "appropriate assistance" looks like, including citations for AI-influenced work where permitted. Offer workshops through writing centers.
- Protect privacy and equity: Avoid uploading student work to third-party tools without approval and guard against bias that may impact multilingual or neurodivergent students.
What's next in the Adelphi dispute
Adelphi maintains it followed policy and that its tools are "reliable, accurate and an important tool." The court will hear arguments on the university's motion to dismiss and the student's claims in November. Regardless of the outcome, institutions can use this moment to tighten policy language, shore up investigations, and reduce harm from false positives.
The takeaway
Relying on a single detector creates risk. A layered approach, combining clear rules, process-based assessment, corroborated evidence, calibrated consequences, and disability-aware review, protects academic integrity and students alike.