Elon and AAC&U national survey: 95% of college faculty fear student overreliance on AI
A nationwide faculty survey sends a clear message: generative AI is reshaping classrooms, and most instructors do not like the trajectory. Concerns center on overreliance, weaker critical thinking, shorter attention spans, and growing threats to academic integrity and the long-term value of degrees.
At the same time, many faculty see a path forward. They believe AI literacy matters, future jobs will demand it, and higher education must address ethical, environmental, and social consequences with intention.
Key findings at a glance
- 95% say GenAI will increase student overreliance on AI tools; 75% say it will increase a lot.
- 90% believe GenAI will diminish critical thinking; 66% expect a lot of impact.
- 83% expect shorter attention spans; 62% expect a lot of impact.
- 86% say teaching roles will be affected; 79% expect their department's teaching model to change, with 43% predicting significant impact.
- 78% report more cheating since GenAI became widely available; 57% say it increased a lot. 73% have personally handled GenAI-related integrity issues.
- 48% say student research has gotten worse due to GenAI; 20% say it has improved.
- 74% believe GenAI will harm the integrity and value of degrees; only 8% expect it to help.
- 63% say Spring 2025 graduates were not well prepared to use GenAI at work.
"Some are innovating and eager to do more; a notable share are strongly resistant; and many are grappling with how to proceed," said Lee Rainie, director of Elon University's Imagining the Digital Future Center. "Without clear values, shared norms and serious investment in AI literacy, we risk trading compelling teaching, deep learning, human judgment and students' intellectual independence for convenience and a perilous, automated future."
"More than nine in ten faculty warn that generative AI may weaken critical thinking and increase student overreliance," added Eddie Watson, vice president for digital innovation at AAC&U. "These findings call for intentional leadership so that human judgment, inquiry, and learning remain central."
Faculty are taking action, but institutions lag
- 69% address AI literacy topics in class: bias, hallucinations, misinformation, privacy, and ethics.
- 61% think GenAI could enhance or customize learning in the future.
- 87% have explicit course policies on acceptable and unacceptable AI use.
- Policy gaps persist: 48% say their institution has clear campus-wide AI guidance; only 35% say their department does.
- Readiness is low: 59% say their institution is not well prepared to use GenAI in preparing students. 68% say faculty have not been adequately prepared, and 67% say the same of non-faculty staff.
Longer-term outlook
- 49% expect GenAI's impact on students' future careers to be more negative than positive; 20% see the opposite.
- 62% believe student learning outcomes will worsen over the next five years.
- 54% expect GenAI to have a more negative than positive impact on students' overall lives at their institution.
What this means for your campus
Here's the signal in the noise: students will use AI. Your task is to make sure it builds capacity instead of replacing it. The fastest way to do that is to align policy, assessment, and practice.
- Codify AI use: Create department-level policies that define permitted, conditional, and prohibited uses. Use concrete examples for assignments.
- Assess what AI cannot easily fake: Oral defenses, process portfolios, staged drafts with citations, in-class problem solving, and practical demonstrations.
- Teach AI literacy: Bias, hallucinations, citing AI responsibly, privacy, and data security. Make ethical use part of grading criteria.
- Require transparent workflows: Prompt logs, model versions, and student reflection on where AI helped or hurt their thinking.
- Redesign research tasks: Emphasize evaluation of sources, replication of methods, and synthesis across conflicting evidence.
- Protect academic integrity: Combine honor codes, clear sanctions, and redesigns that reduce zero-effort submissions. Use detection sparingly and never as the only proof.
- Upskill faculty quickly: Offer short clinics on prompt writing, grading with AI, and building AI-aware rubrics. Pair early adopters with departments that want help.
- Prepare students for work: Teach job-relevant AI workflows in your field, plus limits, audit trails, and human-in-the-loop checks.
- Address costs and externalities: Discuss environmental impact, data provenance, and platform privacy settings before adoption.
- Measure what changes: Track cheating reports, dropout rates in attention-heavy courses, and the quality of student research over time.
About the study
This non-scientific survey was conducted between October 29 and November 26, 2025, using a list of college and university faculty members developed by AAC&U and Elon University. The 1,057 respondents represent a range of disciplines, institution types, and roles. Findings are informative but not generalizable to all college faculty. Full methodology details and topline findings are included in the report.
About AAC&U
The American Association of Colleges and Universities is a global membership organization advancing the democratic purposes of higher education by promoting equity, innovation, and excellence in liberal education. Learn more at aacu.org.
About Elon University's Imagining the Digital Future Center
Imagining the Digital Future is an interdisciplinary research center focused on the human impact of accelerating digital change and the sociotechnical challenges ahead. Established in 2000 and renamed in 2024, it is funded and operated by Elon University.
Practical next step
If your department needs structured AI literacy and workflow training for faculty or students, explore curated options by job role here: Complete AI Training. Build a shared baseline fast, then iterate on policy and assessment with evidence, not guesswork.