They Have Their Doubts: What It’s Like to Be in School, Trying Not to Use A.I.
Erin Perry, a graduate voice performance student at the Peabody Institute of Johns Hopkins University, faces an unusual challenge: studying classical music while resisting the push to use artificial intelligence (A.I.) in her education. As a classically trained singer, she is wary of how generative A.I. tools threaten the integrity of artistic work, especially as startups replicate artists' styles without permission.
One assignment highlighted her concerns. Perry was asked to write a program note about a classical piece, then compare it with ChatGPT’s output. She found the A.I.’s description riddled with inaccuracies, wasting her time rather than helping. Despite this, her school actively promotes A.I. use, even launching its own platform, the Hopkins AI Lab, to integrate generative technologies into teaching and research.
For Perry, this push feels contradictory. While students are encouraged to advocate for their artistic value, they are simultaneously urged to adopt tools that could undermine their work. This tension reflects a broader divide among students nationwide.
Widespread A.I. Use, and Resistance
Many students embrace A.I. for writing essays, doing research, or outright cheating, eroding trust between students and educators. ChatGPT usage spikes at the start of each academic term, a sign of how routine the tool has become. Yet a less visible group is choosing to avoid A.I., driven by principle rather than convenience.
Some resist for practical reasons: professors using A.I. to grade or plan lessons without disclosing it, unreliable A.I. text detectors, or the loss of traditional academic experiences such as personalized feedback. Graduate students worry that automation will eliminate their roles in academia. These are valid frustrations.
But deeper worries exist. Students question whether education should become a matter of generating quick outputs rather than developing critical thinking. They raise ethical issues about copyright infringement, environmental impact, and the unknown effects of A.I. on young minds.
Voices of A.I. Skepticism in Education
- Sabrina Rosenstock (University of Michigan) criticizes A.I.'s energy consumption and how thoroughly it has saturated her classes. In a coding course, auto-completion tools like Google's Gemini Code Assist hindered her learning by removing hands-on practice. In creative classes, she questions why professors push A.I. for idea generation when students can brainstorm together on their own.
- Kate Ridgewell (UCLA) avoids A.I. because of environmental concerns and the high rate of hallucinations and bias in its outputs. As an archival science student, she worries about the field's growing reliance on A.I. for cataloging and the burden of verifying its accuracy. She also notes how hard it is to teach digital literacy when students lean on A.I. for convenience.
- Kisa Schultz (University of Oregon) shifted to a no-A.I. policy after learning about its environmental cost and recent studies showing reduced brain activity linked to A.I. use. As an English doctoral student, she values original writing and critical thinking, seeing A.I. as encouraging students to skip the mental effort required to learn deeply.
Growing Institutional A.I. Integration
Despite these concerns, institutions are rapidly adopting A.I. tools. The American Federation of Teachers has partnered with OpenAI and Microsoft to train educators on A.I. integration. The recent "A.I. education pledge" commits more than 60 companies to developing A.I. curricula for K–12 students.
Companies like Google are expanding A.I. tools for classrooms, including chatbots designed for study support and tailored lesson planning. Schools such as Ohio State University now require incoming students to demonstrate A.I. fluency through dedicated courses.
This growing momentum makes it harder for students who want to avoid A.I. in their education. The pressure to conform is strong, but voices like Erin Perry’s remind us of the importance of choice and critical reflection in adopting new technologies.
What Educators Can Take Away
- Recognize that some students resist A.I. for ethical, environmental, and educational reasons, not just convenience.
- Balance A.I. integration with teaching critical thinking and digital literacy to prevent overreliance on automated tools.
- Engage students openly about the benefits and risks of A.I., allowing space for dissenting opinions and principled refusal.
- Consider transparent policies on A.I. use in coursework and grading to build trust and fairness.
As A.I. becomes deeply embedded in education, the challenge is to use it thoughtfully without compromising the core goals of learning and creativity.
Ultimately, education should empower students to think independently, not just produce automated answers.