Colleges Turn to Oral Exams as AI Makes Written Work Unreliable
Cornell University biomedical engineering professor Chris Schaffer requires students to defend their problem sets in person. No laptops, no papers, just direct conversation with an instructor. The method, rooted in Socratic questioning, is spreading across U.S. colleges as faculty confront a concrete problem: students submit flawless written assignments they cannot explain.
Faculty are observing a troubling pattern. Take-home essays arrive polished and complete. When asked to discuss the work, students draw blanks. The shift raises questions about whether students are developing critical thinking skills or outsourcing intellectual work to generative AI tools.
The Problem Instructors Face
Emily Hammer, an associate professor of Middle Eastern Languages and Cultures at the University of Pennsylvania, pairs oral exams with written papers in her seminars. She said students are "losing skills, losing cognitive capacity and creativity."
Hammer forbids AI on all writing assignments but acknowledges enforcement is impossible. Instead, she makes the stakes clear: students who haven't written their own work will face a stressful oral defense. The University of Pennsylvania has launched faculty workshops on oral exams as part of what administrators describe as a "massive shift toward in-person assessments."
The approach breaks with tradition in American higher education. Oxford and Cambridge universities have long used oral formats, and some U.S. schools experimented with them during the COVID-19 pandemic to prevent online cheating. Interest intensified after ChatGPT's launch in late 2022.
Methods Vary Widely
At New York University, faculty are requiring office-hours visits, assigning presentations, and cold-calling students in class. Clay Shirky, NYU's vice provost for AI and technology in education, said instructors want to ask directly: "Do you know this material?"
One NYU professor is using AI itself to conduct oral exams. Panos Ipeirotis, who teaches AI product management at the Stern School of Business, deployed an AI voice agent that conducts final exams. Students log in remotely and answer questions posed in a cloned version of the professor's voice. The chatbot drills into details based on their responses, offers feedback, and flags whether students actually contributed to group projects.
Student feedback was mixed. Andrea Lui, a business major, found the voice surprisingly human but noted the conversation felt choppy with odd pauses and confusing multi-part questions. "It felt kind of awkward to be talking to what was pretty much a blank screen," she said. Still, she agreed with educators that "there is no perfect world where AI exists and kids are not abusing it."
Ipeirotis plans to use the tool in all future classes and wants to pair oral exams with every written assignment. "I don't trust written assignments anymore to be the result of actual thinking," he said.
Student Response and Concerns
Cornell's Schaffer splits 20-minute oral defense sessions with teaching assistants, grading only the conversations, not the written problem sets. The approach gives students a direct incentive to actually understand the material.
Some students initially felt nervous. Cornell junior Olivia Piserchia, a biomedical engineering major, found the oral defense "nerve-wracking" at first but came to value one-on-one time with instructors. "It's a lot harder to look people in the eyes and say out loud, 'I don't know this,'" she said. The accountability pushed her to study more thoroughly.
Critics raise a valid concern: oral exams can be unsettling for shy students or those with serious anxiety. Carolyn Aslan, who leads Cornell's oral exam training, said clarifying the format ahead of time and starting with easy questions helps. "Sometimes it's actually good to get that quiet student one-on-one, and you finally get to hear from them," she said.
Broader Implications
Educators worry that students who skip the mental struggle required for problem-solving won't develop skills needed for upper-level classes and careers. The shift reflects a fundamental challenge: how to assess what students actually know when AI can produce convincing written work.
For education professionals, this represents a practical pivot. Institutions are moving beyond assuming students will use AI responsibly toward building assessment methods that make cheating visible and learning verifiable. Whether oral exams become standard or remain supplementary, they signal that written work alone no longer proves understanding.