University of Michigan student sues over AI-cheating accusation, says disabilities shaped her writing

An Ohio student is suing UM after being accused of AI-written papers; she says her style and anxiety/OCD were misread. The case flags shaky detectors and urges writers to keep proof.

Published on: Feb 13, 2026

UM student sues over AI cheating accusation: what writers should know

An Ohio student identified as Jane Doe has sued the University of Michigan, alleging she was falsely accused of using AI to write course papers. The federal lawsuit, filed Feb. 9 in Detroit, says a graduate student instructor in "Great Books 191" charged her with academic misconduct because her writing "looked like AI."

According to the filing, Doe has generalized anxiety disorder and obsessive-compulsive disorder and shared documentation with the university. She argues traits like a formal tone, tight structure, consistent style, and visible stress during confrontations were misread as signs of AI use.

The accusation and its fallout

The lawsuit states the graduate assistant began filing misconduct charges in fall 2025. Despite Doe providing evidence explaining how her disabilities shape her writing, she received a "no record" grade for the course.

That mark, the suit claims, damaged her academic standing and made transfers or grad school applications more difficult. The university said it does not comment on pending litigation.

Claims of bias about AI

Doe also alleges the assistant showed public bias toward assuming AI misuse. The lawsuit cites posted statements including, "If a university cannot stand up for its values against AI then death is only a mercy," and, "I fear that grading has made me paranoid and inclined to see AI everywhere."

The student has appealed the misconduct charge and filed a complaint with the U.S. Department of Education Office for Civil Rights. The appeal is paused while that civil rights case proceeds. The federal case is assigned to U.S. District Judge Laurie Michelson; no hearings are currently scheduled.

Why this matters for writers

AI detectors and "style-matching" judgments can be wrong, especially with highly structured or uniform prose. This is not hypothetical: research has shown detectors can misclassify human writing and can be biased against certain writers, including non-native English speakers.

If your livelihood depends on words, you need a paper trail that proves ownership and process. Don't rely on anyone's gut check, including your own.

Protect your work: practical steps

  • Keep version history: Draft in tools that timestamp edits (Google Docs, Notion, Git, or local files with autosave). Export dated PDFs of major drafts.
  • Capture your process: Save outlines, notes, sources, and brainstorming snippets. Screenshots and time-stamped files help.
  • Use local logs: Keystroke or session logs (e.g., macOS/iOS Screen Time, Windows Focus Sessions) can show active writing time.
  • Vary the surface: If your style is extremely uniform, add small, authentic variation in sentence length, rhythm, and phrasing without losing clarity.
  • Document any accommodations: If you have disabilities that affect writing style or communication, file documentation early with your institution or client.
  • Get policy clarity: Ask for written policies on AI use, disclosure, and evidence standards. Request a human review process, not detector scores alone.
  • If accused: Stay calm, provide drafts and timestamps, and request specific evidence. Ask for an independent reviewer if bias is a concern.
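The version-history advice above can be sketched with Git driven from Python. This is a minimal, illustrative example: the directory name, file name, and author identity are placeholders, and it assumes `git` is installed on the system.

```python
# Sketch: build a timestamped paper trail of drafts with Git.
# All names here (draft_history, essay.md, the author identity) are
# illustrative placeholders, not a prescribed setup.
import subprocess
from pathlib import Path

repo = Path("draft_history")
repo.mkdir(exist_ok=True)

def git(*args):
    """Run a git command inside the draft repository."""
    return subprocess.run(
        ["git", "-C", str(repo), *args],
        check=True, capture_output=True, text=True,
    ).stdout

git("init", "-q")
# Local identity so commits work on a fresh machine (placeholder values).
git("config", "user.name", "Jane Writer")
git("config", "user.email", "jane@example.com")

essay = repo / "essay.md"

# Commit the outline, then the first draft: each commit records
# author, date, and a content hash, which is exactly the kind of
# evidence a misconduct review can check.
essay.write_text("Outline: thesis, three supporting points\n")
git("add", "essay.md")
git("commit", "-q", "-m", "outline")

essay.write_text(essay.read_text() + "First full draft paragraph...\n")
git("add", "essay.md")
git("commit", "-q", "-m", "first draft")

# Show the dated history of the work.
print(git("log", "--format=%h %ad %s", "--date=iso"))
```

Exporting dated PDFs of major drafts alongside this history gives a second, human-readable copy of the same timeline.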

If you use AI in your workflow

Be transparent where required, keep human oversight on structure and ideas, and log your prompts and edits. Treat AI like a drafting assistant, not an author; you're accountable for the final work.
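Logging prompts and edits can be as simple as an append-only JSON Lines file. A minimal sketch, assuming a local file is acceptable evidence for your institution or client; the file name and record fields are illustrative, not a standard:

```python
# Minimal sketch of a local prompt/edit log.
# LOG_PATH and the record schema are hypothetical choices.
import json
import datetime
import pathlib

LOG_PATH = pathlib.Path("ai_usage_log.jsonl")

def log_entry(kind: str, text: str) -> None:
    """Append a timestamped record of a prompt, AI output, or human edit."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "kind": kind,  # e.g. "prompt", "ai_output", "human_edit"
        "text": text,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_entry("prompt", "Suggest a clearer topic sentence for paragraph 2.")
log_entry("human_edit", "Rewrote the topic sentence in my own words.")
```

Because each line is a self-contained JSON record with a UTC timestamp, the log can later be shown line by line as evidence of when and how AI was used.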
