A writing professor explains why she is neither pro-AI nor anti-AI in the classroom

A Babson College writing professor now requires students to identify where their own thinking must enter the work, after research showed AI use improved essay scores but not actual knowledge.

Published on: Mar 17, 2026

How One Professor Teaches Students to Think Before Using AI

A writing professor at Babson College has shifted her approach to generative AI in the classroom, moving from neutral observer to guide with a clear point of view. The change reflects a growing tension in higher education: students arrive with established AI habits, yet lack the critical judgment to know when the tools help or hurt their learning.

The professor's early experiments seemed promising. In spring 2023, she asked students in a social media class to research musical artists using ChatGPT, then fact-check the results. The tool's confident-sounding errors, such as invented tours and scrambled album dates, became a teaching moment. When a student exclaimed "It lies!", the room erupted. Students quickly grasped the larger problem: whose voices disappear when AI trains on incomplete data?

By fall 2023, her approach shifted. She introduced a required section in research proposals called "Be Better Than a Robot," asking students to identify where their own thinking had to enter the work. The goal was direct: if ChatGPT could write the paper, why spend weeks on it?

The Learning Problem

Recent data shows the stakes. More than half of teenagers now turn to AI for homework help, according to Pew Research Center. By the time they reach college, these habits are entrenched.

A 2024 study in the British Journal of Educational Technology found that students using ChatGPT saw short-term essay score improvements but showed no meaningful gains in actual knowledge. The researchers identified "metacognitive laziness": students became dependent on the tool and lost the ability to self-regulate and engage deeply. This is cognitive offloading in action.

The professor noticed this pattern in her own classroom. Some student work showed signs of assembly rather than intellectual struggle. The difference was visible to her. It wasn't visible to them.

Teaching Discernment, Not Bans

She rejected a simple ban. Instead, she asks students to write both with and without AI, then compare versions and justify their choices aloud. The goal is noticing: when does the tool accelerate routine work? When does it flatten complexity?

Many students arrive already anxious, already optimizing for grades rather than learning. They've spent years producing right answers instead of wrestling with hard questions. Before they can develop judgment about any tool, they need something more basic: trust in their own thinking.

The professor describes her position as an "unsettled middle": neither fully embracing nor refusing the technology, but engaging with it critically. Her students often end up in the same uncertain space.

Learning to sit with that uncertainty matters. Tolerating the slowness and mess of thinking things through, rather than reaching for frictionless answers, is where discernment begins.

If students will encounter these tools throughout their careers, ignoring that reality does them no favors. The responsibility is to help them develop judgment: when is a shortcut strategic, and when does it undermine their own thinking?

For educators working through this moment, the question isn't whether to allow AI. It's how to teach students to use it without losing themselves in the process.

