AI on Campus Is a Patchwork. Here's How Educators and Student Writers Can Work Smarter
University of Houston freshman Ava Romero doesn't use AI much for classwork - and when she does, her professors call the shots. In English and government, she must use approved tools and keep AI-assisted content under 20%, as measured by the school's detection software. In history, AI is banned outright. One slip could violate academic honesty rules.
That split isn't rare. From UH to Rice to Texas A&M, professors set their own rules, and students are stuck decoding what's allowed this week in this class. It's messy, and it's already changing how people teach and learn.
What students see on the ground
One day in history class, Romero saw ChatGPT glowing from nearby laptops. "Even though I feel like it is our future, I don't really trust it," she said. "I like using my own brain."
Sophomore Andre Orta treats AI as an ongoing test. It helps him organize calculus. In physics, not so much. If he used it to spit out answers without learning, he knows it would show up on exams: "They want to make sure that everybody gets it."
Shalom McNeil, a Texas Southern University senior, leans on AI to organize notes or break down tough math but not to script news packages for broadcast journalism. "If you demonize it, you're getting left behind," he said.
Faculty playbooks: from integrated to off-limits
Policies span the full range. Some professors build AI into assignments, with guardrails and transparency. Others allow grammar edits or brainstorming but not full-draft generation. Some require students to share prompts and short reflections.
At Rice, associate teaching professor Risa Myers allows AI on homework under clear rules: use only methods taught in class, include your prompts, and add a four-sentence reflection. Homework counts less now, and there are more quizzes. "It makes them keep up with the class," she said.
On the other end, UH history professor Robert Zaretsky moved back to hand-written blue book essays. "You can't find your voice as a writer through AI," he said.
Detectors overreach - and everyone pays
AI detection tools are shaky. UH professors described Turnitin over-flagging work, sparking stressful academic honesty cases. In one case, a student who wrote with a tutor got a "100% AI" warning. Students now send version-history videos to show their process.
Some universities avoid AI detectors entirely, citing limited proof they work. Even where they're used, several administrators say detectors alone shouldn't be the basis for an academic dishonesty charge. Independent checks and conversations matter. For context, see statements on limits from Turnitin and prior notes on low accuracy from OpenAI.
What this means for educators
Blanket campus rules won't cut it. Learning outcomes differ by course, so faculty need room to set expectations. As Rice Provost Amy Dittmar put it: the tool can help or hurt, depending on how it's used for the outcomes you care about.
The goal: reduce confusion, protect integrity, and teach modern skills without letting shortcuts replace thinking.
Practical guardrails you can adopt this term
- State allowed vs. banned uses: Allowed: grammar, spelling, summarizing notes, outlining, explaining concepts, test-prep questions. Banned: full-draft generation, using methods not covered in class, uncredited edits, or "prompt and paste."
- Require light transparency: Share prompts and the top response, plus a 3-4 sentence reflection on what changed after review. Allow screenshots or pasted logs.
- Shift assessment weight: More in-class quizzes, short oral checks, or whiteboard explanations. Keep homework meaningful but lower stakes.
- Collect process, not just product: Drafts, outlines, citations, and version history. This makes original work easier to verify and teach.
- Protect fairness: Make it clear detectors are advisory, not verdicts. If flagged, use a conversation and process evidence before any penalty.
- Design prompts that reward thinking: Personal context, local data, course-specific sources, or steps that require intermediate work and reflection.
- Support non-native writers: Allow grammar/tone assistance with disclosure. Grade on ideas and structure, not just polish.
- Plan for take-home integrity: Randomized problem sets, unique datasets, or rotating case details. If needed, occasional blue book or lab-style checks.
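For the "randomized problem sets, unique datasets" idea above, here's a minimal Python sketch of one way to do it (all function names and parameters here are illustrative, not a specific tool's API): hashing a student ID with the assignment name yields a stable seed, so each student gets distinct numbers that regenerate identically for regrading.

```python
import hashlib
import random

def stable_seed(student_id: str, assignment: str) -> int:
    """Derive a reproducible per-student seed with no stored state."""
    digest = hashlib.sha256(f"{assignment}:{student_id}".encode()).hexdigest()
    return int(digest, 16) % (2**32)

def randomized_parameters(student_id: str, assignment: str) -> dict:
    """Generate unique-but-reproducible numbers for a problem statement."""
    rng = random.Random(stable_seed(student_id, assignment))
    return {
        "initial_velocity": rng.randint(5, 25),        # m/s
        "angle_deg": rng.choice([30, 37, 45, 53, 60]),
        "mass_kg": round(rng.uniform(0.5, 5.0), 1),
    }

# Same student + assignment always yields the same variant,
# so a flagged submission can be checked against the exact numbers issued.
assert randomized_parameters("s100234", "hw3") == randomized_parameters("s100234", "hw3")
```

Because the variant is a pure function of (student, assignment), there's nothing to store or leak: any grader can regenerate a student's exact problem on demand.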
For writing-heavy courses
- Voice work: Use in-class freewrites and short reflections to build a baseline style over time.
- Scaffolded essays: Proposal → annotated notes → draft → conference → revision. Require citations and examples tied to class materials.
- AI as editor, not author: Permit grammar/tone edits with disclosure. Ban full paragraph generation without revision and citation.
- Quick oral debrief: A two-minute "explain your draft" check deters ghostwriting and deepens learning.
For STEM and code-heavy classes
- Guard the fundamentals: If AI helps debug or outline, students still need to explain what the code does and why.
- Limit beyond-syllabus tricks: If a technique hasn't been taught, it's off-limits unless documented and explained.
- Pair programming with oral checks: Short viva-style questions on core concepts keep skills honest.
Student playbook: use AI without risking your grade
- Know the rules per class: Don't assume carryover. When in doubt, ask.
- Keep a paper trail: Save prompts, drafts, and a few notes on what you accepted or rejected.
- Use it to learn, not skip: Summaries, outlines, and example questions are fine if allowed. Final answers should be yours.
- Pressure test yourself: Do a quick self-quiz or explain your work out loud. If you can't teach it, you don't have it.
Why the tension won't disappear
Some professors worry AI dulls core skills. Others worry it changes what "learning" even means. As UH's Lauren Zentz put it, the fear of being unfairly flagged can create a "minefield" for well-intentioned students - while actual cheating makes it worse for everyone.
There's no single fix. But clear rules, light disclosure, smarter assessment, and real due process go a long way.
Quick-start policy template you can copy
- Permitted: Grammar/spelling fixes; outlining; study questions; clarifying concepts; debugging guidance (if annotated).
- Prohibited: Generating final drafts or answers; using methods not taught; hiding AI use; uploading assignments for a full rewrite.
- Disclosure: Paste prompts and top outputs at the end of your submission plus a 3-4 sentence reflection on changes you made.
- Assessment: Expect in-class quizzes, brief oral checks, and draft submissions with version history.
- Academic integrity: Detector results are advisory only. Allegations require a meeting and review of your process materials.
A last word from the classroom
Some students will still push the line. Professors are already getting creative - from hidden instructions that reveal AI-written text to hand-drawn problem sets that tools can't parse. As one student said, "Professors are getting smarter."
If you teach, set clear rules and keep the learning outcomes front and center. If you write, use the tools to think better, not to outsource your voice.
Quotes credited to the individuals named in this article.