A Philosophy Professor Replaced ChatGPT With Himself
A University of Chicago philosophy instructor noticed a troubling shift in his classroom two years into teaching. Students who participated actively in office hours and class discussion were no longer writing the best essays. Those who barely showed up could log into ChatGPT, prompt it correctly, and produce stronger work.
The problem was immediate and concrete: essay assignments no longer measured what students actually understood about philosophy.
Eliminating essays wasn't an option. Philosophy requires students to translate complicated ideas into simple terms - that's where the real thinking happens. Timed, handwritten essays would strip away the chance to research, draft, and revise. That leaves professors in a bind.
The instructor decided to stop fighting AI detection and instead replace the tool entirely. He proposed writing a class essay together, with him acting as a collaborative partner rather than a grader.
How the experiment worked
Students had questions. Would everyone write their own section or collaborate in a shared document? Should they work in subgroups? How would grading work?
When the instructor said the structure was up to them, the class got excited. They felt ownership over an experiment they were designing themselves.
They chose to write collectively and be graded on individual contributions. They wanted the instructor to have veto power - to rearrange, cut, or alter passages without explanation. They didn't want class participation factored into grades, fearing it would make discussions feel performative rather than genuine.
The class voted on essay topics, then flipped a coin. The result: a 10,000-word philosophical essay on artificial intelligence, written across a shared Google Doc with every student contributing.
Students wrote, revised, expanded, and cut simultaneously. They emailed suggestions about essay direction, summarized background readings, and proposed counterarguments. Everyone left a mark on the final piece.
What students reported
Anonymous feedback was overwhelmingly positive. Nearly all comments praised the experience. One student wrote: "I really think it's refreshing to experience something new like this in the classroom. Although sometimes it got stressful, I learned so much not only about the topics at hand, but also about collaboration to its fullest extent."
Another said the experience matched their idea of what university should be: exploring ideas and learning alongside peers in a way that builds genuine knowledge.
Most students said they worked harder than in the previous quarter. More than two-thirds said they'd choose to do it again.
The key difference: students felt they were doing actual philosophy, not performing it for a grade. They wanted to submit the essay to an academic journal and continued working toward publication after the course ended.
Why this matters for teaching
The collaborative essay model creates structural disincentives for AI use. Students must participate in every class to understand where the essay is heading. They constantly defend their writing to peers and the instructor. Using ChatGPT becomes inconvenient when you're writing alongside others in real time.
For instructors, the approach transforms the classroom into a self-organizing collaborative space. The instructor learns from students, encounters novel philosophical insights, and gets to know them better than traditional grading allows.
The strategy works particularly well in smaller classes of around 20 students, though the underlying principle applies more broadly: stop having students write for you. Have them write with you.
For educators weighing how to respond to AI in the classroom, this model offers a concrete alternative to detection tools and restrictions. It addresses the root problem - why students turn to AI in the first place - rather than treating the symptom.