Everyone Is Cheating Their Way Through College
Chungin “Roy” Lee started at Columbia University last fall and admitted to using generative AI to cheat on nearly every assignment. As a computer science major, he relied heavily on ChatGPT for his programming classes, submitting AI-generated work with only minor personal edits. By his own estimate, AI wrote about 80 percent of each of his essays, with the remaining 20 percent in his own voice.
Lee's academic journey was unconventional. After losing a Harvard admission due to disciplinary issues, he spent time at community college before transferring to Columbia. His personal essay, crafted with AI assistance, framed his setbacks as a motivational story about ambition and entrepreneurship.
Once at Columbia, Lee didn’t focus on traditional academics or GPA. He viewed most assignments as irrelevant and easily bypassed with AI help. “They’re hackable by AI, and I just had no interest in doing them,” he said. For him, the Ivy League’s value lay more in networking—meeting future co-founders and partners—than in coursework. By semester’s end, he teamed up with a fellow student, Neel Shanmugam, to develop startup ideas, though none gained traction.
Lee’s entrepreneurial spirit led to a controversial product: a tool that hides AI use during remote coding interviews, letting candidates cheat undetected. He and Shanmugam launched the tool with a provocative banner, and Lee posted a video of himself using it during an Amazon internship interview; he later declined the resulting offer. When Columbia caught wind of this, it placed Lee on disciplinary probation for promoting cheating technology.
Despite Columbia’s AI policies—restricting AI use unless explicitly approved—Lee said every student he knows uses AI to cheat. He believes the definition of cheating will soon change as AI becomes a standard part of student work.
AI Use Is Ubiquitous Among Students
A 2023 survey showed nearly 90% of college students had used ChatGPT for homework within two months of its launch. Since then, AI tools have become embedded in the academic workflow: taking notes, creating study guides, summarizing readings, and drafting essays. STEM students use AI to automate research, data analysis, and coding assignments.
Sarah, a freshman at Wilfrid Laurier University, first used ChatGPT to cheat in high school and continued through college. She credits AI for dramatically improving her grades and efficiency, writing essays in two hours that used to take her 12. AI use is so common among her peers that she rarely sees laptops without ChatGPT open.
Teachers Struggle to Adapt
Some educators have attempted to counter AI cheating by using in-person exams or oral assessments. Brian Patrick Green, a tech-ethics scholar, stopped assigning essays after witnessing how easily ChatGPT could generate them. Yet students persist in using AI even for personal reflections or introductions, revealing the challenge of policing AI use.
Cheating itself isn’t new, but AI has drastically lowered barriers. Many students see no risk in using tools that simplify assignments with minimal effort. Troy Jollimore, a professor at Cal State Chico, warns about students graduating without essential literacy or cultural knowledge. The rapid adoption of AI is creating a generation whose learning experiences are fundamentally altered.
Pre-ChatGPT Cheating and the Current Landscape
Before ChatGPT, tools like Chegg and Course Hero already enabled widespread academic dishonesty. These platforms offered quick access to textbook solutions and expert answers for a monthly fee. ChatGPT’s launch introduced a faster, more powerful alternative, but universities have struggled to regulate AI use effectively.
Most schools leave AI policies to individual professors. Some allow AI with citation, others restrict it to conceptual help, and some ban it entirely. Students often interpret these guidelines loosely, sometimes unknowingly violating policies by using AI to polish drafts or find citations.
How Students Use AI to Cheat
Wendy, a finance freshman, opposes outright plagiarism but routinely uses AI to generate essay outlines and organize ideas. She feeds her professor’s instructions into AI and receives structured outlines, which she then fills in herself. While she values the learning process, she prioritizes good grades and admits AI reduces her need to think deeply about essay planning.
Ironically, Wendy’s latest essay discussed critical pedagogy and the role of education in fostering critical thinking—a topic she approached with AI assistance. She acknowledges AI’s potential to diminish critical thinking but feels dependent on it now.
Detecting AI-Generated Work Remains Difficult
Professors often spot AI writing by its robotic tone, balanced arguments, or repeated phrases. Some try embedding odd phrases in assignments as “traps” to catch AI-generated content. However, students quickly share these traps on social media, reducing their effectiveness.
Studies show professors frequently fail to detect AI-written essays; in one study, AI-generated submissions went unflagged 97% of the time. AI detectors such as Turnitin report a likelihood score that a passage was machine-written, but their accuracy varies, and they can produce false positives, particularly for neurodivergent students and non-native English speakers. Students, in turn, run text through multiple AI tools to “launder” it and evade detection.
Rethinking Education in the AI Era
Most educators have realized that policing AI use alone won’t solve the problem. Cheating often correlates with students’ mental health, anxiety, and exhaustion. Some teachers express frustration and despair as they grade essays full of errors and superficial content produced by AI.
Addressing AI’s impact on learning may require a broader approach—shifting how education assesses student understanding and engagement beyond traditional essays and exams.