AI boosted homework grades, hurt exam scores - U of T professors change how they teach

Students lean on chatbots for take-homes, then stumble in exams. Professors shift to process-focused, in-person checks, live tasks, and clear AI rules to keep learning honest.

Categorized in: AI News, Education
Published on: Dec 01, 2025

AI Is Reshaping Student Work. Teaching Has to Catch Up

Two years ago, University of Toronto Scarborough associate professor André Cire spotted a sharp split in his data science class: take-home grades shot up while in-person final exam scores dropped. The likely cause was straightforward. Students leaned on large language models (LLMs) like ChatGPT and Gemini for assignments, then struggled without them in the exam hall.

And it's not isolated. Polling suggests about one in six Canadian students over 18 used AI tools in their coursework in 2024, up 13 percentage points year over year. The bottom line for educators: AI use is here, it's growing, and the job is to channel it into learning rather than shortcuts.

What Professors Are Seeing

Cire says student writing now reads more "fancy," with extra adjectives and polish. Code submissions are longer and more sophisticated, straying from class examples: classic LLM fingerprints.

Computer science professor Karen Reid notes another challenge: you can't reliably detect AI-generated text at scale. One student even submitted a line straight from ChatGPT ("Oh certainly, I can give you another set of examples"), but that kind of slip-up is rare.

Under the university's code, using AI when a professor has prohibited it counts as an academic offence. Cases involving "unauthorized aids" have risen since 2020, but cheating also spiked during the pandemic, before mainstream LLMs, so AI is part of the picture, not the whole story.

Beyond Plagiarism: Shifts in Student Behavior

Rahul Krishnan, assistant professor of computer science, reports fewer questions on course forums. Reid is seeing lighter traffic in office hours. It makes sense: LLMs answer at 2 a.m. when faculty can't.

But there's a catch. These tools can be overly agreeable, which can inflate a student's sense of understanding. That confidence evaporates under exam conditions. As Krishnan puts it, the risk is students stop learning how to learn.

Assessment Redesigns That Actually Work

Several faculty at U of T are shifting from output-only grading to evaluating the process, interaction, and in-the-moment reasoning.

  • Make it personal and specific: Cinema studies professor Bliss Cua Lim assigns interviews with relatives about past cinema experiences. Students tend to do the real work because the task is interesting and grounded in lived history.
  • Bring problems into the room: Cire invites business leaders to class and has students question them live. Generic case prompts are easy to feed into an LLM. Real people with real constraints aren't.
  • Split assignments into evidence + response: Lim asks for annotated readings (color-coded highlights of claims and supports), then a short response. Students show what they actually saw before offering opinions.
  • Weight in-person checks: Cire pairs assignment deadlines with in-class quizzes and spot oral questions. These now make up about 30% of the grade (a weighting sketch follows this list). Since the shift, he's seen grades stabilize and engagement rise.
  • Increase midterms: More frequent, smaller in-person assessments reduce the payoff of outsourcing a single take-home.
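
To make the weighting concrete, here's a minimal gradebook sketch. Cire's roughly 30% in-person share is the only figure from the article; the exact component split, the function names, and the divergence threshold below are assumptions for illustration.

```python
# Illustrative only: the component split and threshold are assumptions,
# not U of T policy. Only the ~30% in-person share comes from the article.
WEIGHTS = {
    "take_home": 0.40,         # assignments completed outside class
    "in_class_quizzes": 0.20,  # short quizzes tied to assignment deadlines
    "oral_checks": 0.10,       # spot oral questions in class
    "exam": 0.30,              # in-person final
}

def final_grade(scores: dict[str, float]) -> float:
    """Blend components so in-person checks (quizzes + oral) carry roughly 30%."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

def flag_divergence(take_home: float, in_person: float, threshold: float = 20.0) -> bool:
    """Flag a take-home score that far exceeds the in-person score,
    the pattern that prompted the redesign in the first place."""
    return (take_home - in_person) > threshold

if __name__ == "__main__":
    scores = {"take_home": 92, "in_class_quizzes": 74, "oral_checks": 70, "exam": 68}
    print(round(final_grade(scores), 1))                 # 79.0
    print(flag_divergence(take_home=92, in_person=68))   # True: a gap worth a conversation
```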

Practical Steps for Educators

  • Set clear AI policies per assignment. What's allowed? Brainstorming, outlining, debugging, citations, code comments, translation? Require students to disclose use and prompts.
  • Grade for process, not just product. Ask for reading annotations, drafts with revision notes, prompt logs, reasoning steps, version history, or notebook cells with commentary.
  • Use short oral defenses. Random 3-5 minute viva-style checks ("Walk me through your approach") deter over-reliance and deepen mastery; a sampling sketch follows this list.
  • Design "LLM-resistant" tasks. Local data, live stakeholders, time-boxed activities, or unique artifacts (interviews, observations, lab captures) make generic AI outputs less useful.
  • Keep human support accessible. Offer structured office hours, small-group consults, and peer review. If students default to AI at 2 a.m., make your support easy and predictable.
  • Teach AI literacy. Model how to critique AI outputs, verify claims, and document use. Treat it like a calculator with opinions: useful, but capable of being confidently wrong. See UNESCO's guidance on generative AI in education for guardrails: UNESCO resource.
  • Align with policy. If you prohibit AI on an assessment, say it clearly and link to your institution's code. For reference: U of T Code of Behaviour on Academic Matters.
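
For the oral-defense item above, a small sketch of how the random draw might be run; the roster, course code, and group size are hypothetical, and only the idea of random short oral checks comes from the article.

```python
import hashlib
import random

# Hypothetical roster and parameters for illustration.
ROSTER = ["alice", "bikram", "chen", "dara", "elif", "farah"]

def select_for_oral_check(roster: list[str], week: int, k: int = 2,
                          course_code: str = "DATA101") -> list[str]:
    """Deterministically sample k students for this week's short oral defense.
    Seeding from the course code and week makes the draw reproducible,
    so anyone can verify that nobody was singled out by hand."""
    seed = int(hashlib.sha256(f"{course_code}-week-{week}".encode()).hexdigest(), 16)
    return random.Random(seed).sample(roster, k)

if __name__ == "__main__":
    for week in range(1, 4):
        print(f"Week {week}: {select_for_oral_check(ROSTER, week)}")
```

Publishing the seeding rule alongside the weekly results keeps the selection process transparent to students.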

Sample Course Pattern You Can Pilot Next Term

  • Weekly: Reading with highlighted claims/support + a 200-300 word reflection noting 1 question they'd ask the author.
  • Biweekly: In-class quiz on core ideas; 10-minute small-group discussion where each student defends one decision they made on the last assignment.
  • Project: Partner with a local organization or internal stakeholder; require stakeholder Q&A and a short oral defense.
  • Transparency: AI use log (what tool, for what step, what changed after verification); a minimal log sketch follows this list. Penalize undisclosed use, not thoughtful use.
  • Final: Prefer cumulative, midterm-style checks over a single high-stakes exam.
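
For the transparency item, a minimal sketch of an AI-use log kept as a per-course CSV. The article specifies only the three questions a disclosure should answer (what tool, for what step, what changed after verification); the remaining field names and the file format are assumptions.

```python
import csv
from dataclasses import dataclass, asdict, fields

# Hypothetical log format; the article only names the three questions a
# disclosure should answer (tool, step, what changed after verification).
@dataclass
class AIUseEntry:
    student: str
    assignment: str
    tool: str                        # e.g. "ChatGPT", "Gemini"
    step: str                        # e.g. "debugging", "outlining"
    prompt_summary: str              # what was asked, in the student's own words
    changed_after_verification: str  # what was kept, fixed, or discarded

def append_entry(path: str, entry: AIUseEntry) -> None:
    """Append one disclosure row to the course CSV, writing a header on first use."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIUseEntry)])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(entry))

if __name__ == "__main__":
    append_entry("ai_use_log.csv", AIUseEntry(
        student="j.doe", assignment="A2", tool="ChatGPT", step="debugging",
        prompt_summary="asked why my join dropped rows",
        changed_after_verification="kept the fix after re-running tests on my own data",
    ))
```

Consistent with the item above, the log exists so that undisclosed use, not thoughtful use, is what gets penalized.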

What to Watch Next

Cire expects to keep changing his assessments as tools improve. That's the job now: iterate faster than shortcuts spread. If the work requires presence, evidence, and explanation, students learn. If it's generic, an LLM will do it faster.

Educators don't need to block AI to protect learning. They need to make learning visible, test reasoning in real time, and reward the path, not just the output.

Resource for Upskilling

If you're building AI policies, assignments, or training for your department, you can scan role-specific options here: Complete AI Training - Courses by Job.

