GenAI in Higher Education: What We Should Assess Now
Generative AI isn't a side project anymore. Students and professors are already using chatbots in courses, and that changes how we teach, learn and grade.
The core question has shifted: what should we assess when human cognition can be augmented or simulated by an algorithm?
AI and Academic Integrity: Help and Headaches
AI is a mixed bag. Advanced translators and text generators can imitate human writing, which makes detection tough. They can also produce false claims and repeat social biases drawn from training data.
At the same time, AI can widen access. It supports learners with disabilities and students using an additional language. Because blocking every tool is unrealistic, the smarter move is to update policy and train everyone to use AI responsibly while protecting academic integrity.
From Enforcers to Stewards of Learning
Educators in our study framed themselves as stewards, not police. They focused on what supports learning versus what substitutes for it. Three skill areas stood out: prompting, critical thinking and writing.
Prompting: A Legitimate, Assessable Skill
Prompting is more than typing a question. Strong prompts require clear thinking, conceptual understanding and precise communication. Poor prompts lead to poor outputs, which forces reflection and revision, and that iteration is valuable cognitive work.
Two conditions kept prompting ethical: transparency and a foundation of one's own knowledge. Without both, prompting can slide into over-reliance or uncritical use.
Critical Thinking: Put AI on the Table
Chatbots produce plausible text that can miss key facts or fabricate details. That's an opportunity. Many educators now use AI outputs as material for critique: have students test claims, check sources and assess coherence.
In a future where algorithmic content is everywhere, it would be unethical not to teach students how to interrogate it.
Writing: Where Boundaries Tighten
Educators drew firm lines between brainstorming, editing and composition. Brainstorming with AI is acceptable as a starting point, as long as students generate and own their ideas.
Editing with AI (e.g., for grammar or clarity) is acceptable after students have written original text and can judge the revisions. Concerns remain about language standardization and the loss of authentic voice.
Having AI draft arguments or prose was largely rejected. The generative phase of writing is a human cognitive process. Skipping the productive struggle undermines original thought.
Living in a Post-Plagiarism Era
Co-writing with GenAI doesn't automatically equal plagiarism. Honesty still matters. The shift is recognizing that human-AI co-creation can be legitimate when it's disclosed and aligned with learning goals.
We're not discarding integrity; we're redefining how students show their thinking in a hybrid cognitive environment.
Five Design Principles for Ethical, Valid Assessment
- Explicit expectations: Clearly state if, when and how GenAI can be used for each task. Ambiguity leads to unintentional misconduct and erodes trust.
- Process over product: Grade drafts, annotations, prompt logs and reflections. Make thinking visible.
- Tasks requiring human judgment: Prioritize evaluation, synthesis and context-specific decisions.
- Develop evaluative judgment: Teach students to spot AI's limitations, biases and errors.
- Preserve student voice: Assess how students know what they know, not just the final wording.
Practical Moves You Can Use This Term
- Add an AI use policy to your syllabus with concrete examples of allowed and disallowed uses for each assignment.
- Require a prompt log or brief reflection: what the student asked, what they got, how they revised, what they kept or rejected, and why.
- Use AI as a foil: provide an AI-generated summary or argument and have students critique accuracy, logic and evidence.
- Differentiate writing phases: allow AI for brainstorming and post-draft editing; prohibit AI for first-draft composition unless explicitly permitted.
- Adopt staged submissions (outline → draft → revision) to emphasize thinking over polish.
- Update rubrics to include originality of reasoning, quality of sources, and critical evaluation of AI outputs.
- Offer clear guidance for students using AI for accessibility or language support while protecting individual voice.
Preparing Students for a Hybrid Cognitive Future
The goal isn't to fear AI; it's to use it to strengthen assessment, integrity and learning. Students should understand both what GenAI can do and where it falls short: hallucinations, oversimplifications and bias.
Post-plagiarism is less a crisis and more a reframe of how knowledge is constructed and demonstrated when humans and systems think together.