MIT writing professor bans AI after students confess to using it to avoid the shame of bad first drafts

An MIT fiction instructor caught two students submitting AI-written stories and turned it into a lesson on why writing's difficulty is the whole point. The struggle to find words, she argues, is how writers discover what they actually think.

Published on: May 11, 2026

What Happens When Students Stop Writing Their Own Stories


The prose was too polished. The character arcs too tidy. Every metaphor felt borrowed. A lecturer who has taught fiction writing at MIT since 2017 recognized the signs immediately: these stories hadn't been written by her students. They'd been written by an algorithm.

She didn't need detection software. She just knew.

The confession

When she told the class what she'd noticed, one student began to cry. She said she'd used AI because she was terrified of looking stupid, of being criticized for bad writing. She loved writing stories but hated what she'd done. The process had been gradual: first a grammar check, then line edits, then structural suggestions, then a complete rewrite. By the end, the AI had written the story.

The second student had never written a short story before. He had an idea but didn't know how to start, so he fed it to an algorithm instead of asking his professor for help.

Other students pushed back. If the ideas were theirs, why did it matter who wrote the sentences? How was using AI different from hiring a human editor? Wasn't the entire point of AI to make work easier?

Why the struggle is the point

The conversation that followed became one of the most productive teaching moments in eight years of instruction. Writing isn't supposed to be easy, the professor told them. And yes, it can be tedious. But tedium isn't the same as rote work.

Writing trains endurance through sustained attention. It's how people learn what they actually think, by attempting to say it. An LLM can reproduce the appearance of that activity, but it can't replace it. The value lies not just in what gets written, but in what happens to the writer during the process.

AI-generated prose is perfectly mediocre. It reads like a parody of MFA-workshopped writing: inert, regular, splendidly empty. Student writing, by contrast, is gloriously flawed. It stumbles on the page. There's visible struggle between what the author meant to say and what actually got said.

That clumsiness is necessary. Its absence would mean the writer never learned to walk.

The cost of bypassing friction

A 2025 MIT Media Lab study found that people who used ChatGPT to write essays showed lower neural connectivity than those who wrote without assistance. Other research warns of similar cognitive effects: reduced persistence, diminished independent performance, attenuation of executive function.

But you don't need a study to understand the central problem. When institutions let students routinely use AI to write, they weaken those students' minds. The danger isn't that machines will replace writers. It's that students grow accustomed to bypassing the friction that once forced their thinking into view.

George Orwell described this condition in 1946, writing about book reviewers who manufactured responses to texts they'd never actually engaged with. The mindless production of language disconnected from thought deforms judgment. Standards collapse. Orwell couldn't have anticipated that this condition would eventually be automated upstream: that an entire workshop could fill with writing that had no author behind it.

What changed in the classroom

After that night, the workshop shifted. Students now talk more openly about frustration, about moments when a draft resists its own author. The instructor still teaches craft: form, structure, revision. But she finds herself returning to the tension between thought and language, to stories where abstraction refuses to take shape.

The policy is now explicit: no AI-generated work. She wants her students' words. She wants access to their thinking, their voice, their struggle to find what they want to say and the best way to say it.

The workshop only works if there's a writer in the room. Someone whose thinking is visible on the page. Someone who can speak directly to that thinking. Using AI to write defeats the premise of peer review, and it atrophies the muscles needed to wrestle with language.

What the class now guards isn't a boundary against machines. It's a sanctuary for authorship: a place where everything on the page, and everything not yet on the page, belongs to an actual person.

For writers and writing instructors, the question has become unavoidable: What gets lost when we surrender the struggle to translate thought into words?

