A Writing Professor Teaches Students When to Stop Using AI as a Shortcut
A writing professor at Babson College has shifted her approach to generative AI in the classroom after watching students produce polished work that revealed nothing about their actual thinking. Rather than ban the tools outright, she now teaches students to recognize when AI accelerates learning and when it replaces it.
The change reflects a broader challenge facing educators: more than half of teenagers already use AI for schoolwork, according to Pew Research Center data. By the time they reach college, many have developed habits around these tools that may undermine their development as thinkers.
The Early Experiments
In spring 2023, the professor asked students to use ChatGPT to research their favorite musical artist, then fact-check the results. The tool's errors were instructive: it invented tour dates, scrambled album releases, and performed worst with less-documented artists. One student threw up her hands and said, "It lies!"
That moment opened a useful discussion about whose knowledge gets represented in training data and whose gets left out. But by fall 2023, the professor found herself grieving what she called "the passing of the pre-AI-everywhere world."
The Problem With Cognitive Shortcuts
Research published in late 2024 in the British Journal of Educational Technology identified a specific problem: students using ChatGPT improved their essay scores in the short term but showed no meaningful gains in knowledge. They developed what researchers called "metacognitive laziness," a dependence on the tool that undermined their ability to think deeply and regulate their own learning.
The issue is that students often cannot tell when they are offloading their own thinking to a machine. They may not realize they've skipped the intellectual struggle that builds understanding.
Teaching Judgment, Not Rules
The professor now frames her role differently. She is no longer a neutral observer but a guide with expertise in what rigorous thinking looks like in her discipline. She knows the difference between a paper that has moved through genuine intellectual struggle and one that has been assembled.
Her approach: ask students to write without AI in some assignments, not as a purity test, but to establish a baseline. Understanding what AI does to your thinking first requires knowing what your thinking can do without it.
In practice, this means students draft with and without AI, compare versions, and justify their choices out loud. They notice when the tool accelerates routine work and when it flattens complexity into something generic.
The Uncomfortable Middle
This approach sits in what Auburn University professors describe as an "unsettled middle": neither fully embracing nor refusing the technology, but engaging with it critically. It is uncomfortable work, but that discomfort is where learning happens.
Many college students arrive already anxious and optimized for grades rather than learning. They have spent years producing right answers instead of wrestling with hard questions. Before they can develop discernment about any tool, they need something more foundational: trust in their own thinking.
Students who learn to tolerate slowness and mess in their thinking develop the judgment to decide when a shortcut is strategic and when it undermines their work. That judgment will matter far more than any tool they encounter.
For writers working in professional settings, the same distinction applies. AI for Writers resources can help you develop that discernment: knowing when AI serves your work and when it replaces the thinking that makes your writing worth reading.