"You're basically stupid if you don't use ChatGPT." So what should we outsource to AI in higher education?
A Leiden professor ran a bold experiment: one graduating student did all supervision with AI instead of a human. It worked better than many expected. The backlash was loud, but the real question is practical: what work belongs to AI, and what learning must stay with the student?
We've been here before with the internet. Knowledge moved from memory to search. Now, parts of thinking are moving from brain to machine. Pretending students won't use AI is wishful thinking.
The campus reality
Use is widespread and growing. Recent national statistics show strong uptake among young people, and those figures are likely already outdated. See CBS for background.
Many students already draft, ideate, and format with AI, often against existing rules. Quality has improved in some cases because the tools remove friction. The risk is just as clear: students may skip key learning moments if we don't redesign the work.
What AI should do vs. what students must do
Outsource to AI (with verification)
- Idea generation, angles, and counterarguments for a topic.
- Outlines, section plans, and alternative structures.
- Email drafts, summaries, style edits, clarity rewrites, and grammar.
- Reference formatting and citation checks (student verifies sources).
- First-pass literature discovery prompts (student confirms relevance and accuracy).
- Data cleaning scripts, boilerplate code comments, and figure captions.
- Checklists, critique prompts, and mock viva questions.
Keep human-owned
- Problem framing, research questions, and the rationale for choices.
- Methodology design, ethics, and data collection.
- Interpretation of results, synthesis across sources, and the final argument.
- Original contributions, reflection on limits, and implications.
- Academic integrity decisions and responsibility for the whole work.
Guardrails for AI-assisted theses
- Disclosure: require an "AI use" section listing tools, prompts, and where they influenced the work.
- AI appendix: include key prompts and relevant outputs to show process, not just product.
- Source verification: students must retrieve and cite real sources; no unverified AI references.
- Ownership checks: oral defenses and in-person spot questions confirm understanding.
- Unique evidence: fieldwork, interviews, lab work, or original datasets anchor the thesis.
- Privacy: forbid inputting sensitive data into commercial tools; prefer institution-vetted or on-prem solutions.
- Misuse policy: define banned uses (ghostwriting entire chapters, fabricated citations, synthetic data without disclosure).
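To make the disclosure requirement concrete, a programme could hand students a fill-in template. The sketch below is illustrative only; the field names are not a standard and should be adapted to local policy:

```yaml
# Example "AI use" disclosure template (field names are hypothetical)
ai_use_disclosure:
  tools:
    - name: "<tool name and version>"
      purpose: "<e.g. outline generation, grammar edits>"
  influenced_sections: ["<chapter/section numbers>"]
  key_prompts_in_appendix: true     # full prompts and outputs go in the AI appendix
  sources_verified_by_student: true # no unverified AI-generated references
  sensitive_data_entered: false     # must remain false for commercial tools
```

A template like this keeps disclosures comparable across theses and gives examiners a fixed place to check before the oral defense.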
Assessment that still measures thinking
- Short, timed in-class writing sprints to sample the student's voice.
- Structured vivas: ask "why this method, not that one?" and "what would change if assumption X fails?"
- Traceability tasks: annotate one chapter with "decision logs" explaining key choices.
- Random oral spot checks on submitted sections.
- Rubrics that reward reasoning, judgment, and evidence quality, not just fluent prose.
- Reflective memo on how AI shaped ideas and where it was misleading.
Use AI as co-supervisor, not ghostwriter
- Ask AI to propose three alternative outlines, then discuss trade-offs with the student.
- Prompt for counterarguments, failure cases, and missing variables.
- Have AI critique the method section for validity threats, then validate those critiques yourself.
- Time-box: 10 minutes with AI, then 20 minutes of student-only reflection to decide what to keep.
- Be cautious with commercial tools for core supervision, as long-term dependency and data risks are real.
Ban or allow? Choose the middle path
Blanket bans don't work. Students will use the tools anyway. The calculator lesson applies: teach fundamentals first, then allow tools with guardrails and clear assessment of understanding.
- Early-stage courses: low or no AI use to build core skills.
- Mid-stage: guided AI use with disclosure and checks.
- Capstone: AI allowed for support, but ownership proven through defense and unique evidence.
90-day rollout for programme leads
- Weeks 0-2: draft policy, define allowed/forbidden uses, set disclosure template.
- Weeks 3-6: pilot in two courses; collect student and staff feedback.
- Weeks 7-10: train supervisors; share prompt patterns and risk checklists.
- Weeks 11-12: update rubrics, add oral checks, require AI appendices.
- Week 13+: review outcomes; adjust per discipline.
Oversight is catching up
Accreditation panels are not yet set up to review AI-related practices. See the NVAO for current standards. Scientific integrity codes are being revised; the KNAW is working on how AI should be included.
What this means for the value of a diploma
Some students will try to slide by. Strong processes make that harder: disclosure, oral checks, and unique evidence anchor real learning. A thesis shouldn't be written by AI, but parts of the thinking process can be accelerated, provided the student remains the owner.
Writing can help students think, but it isn't the only way to learn deeply. Fieldwork, creation, and real-world observation often teach more than polishing paragraphs.
For educators who want practical help
- Curated AI upskilling by role: Complete AI Training: Courses by Job