AI Crutch or Classroom Tool? CMU CS Confronts Student Dependence and Falling Grades

At CMU, heavy AI use in CS classes is backfiring: weaker fundamentals, lower grades, fewer office-hour visits. In response, some courses teach careful use while others ban it outright.

Published on: Feb 10, 2026

AI is everywhere in CS classrooms - and it's showing cracks

Professors and TAs in Carnegie Mellon's School of Computer Science are seeing students lean hard on tools like ChatGPT, Microsoft Copilot, Claude, and Google Gemini. "I definitely think there's an over-reliance on AI that I see among my peers," said Elena Li, a junior in SCS. "I don't know if addiction is the right word, but I feel like it almost is an addiction."

The results are showing up in behavior and outcomes: students dropping tougher classes, fewer office hour visits, and lower grades than they likely would have earned without AI. "How do you use AI effectively when no answer exists?" asked Dr. Michael Taylor, assistant professor of computer science. "Do you allow it? Disallow it? Encourage it? Warn against it?"

What CMU is trying

In Effective Coding with AI (15-113), Taylor works with undergrads, grads, and faculty from across CS to hunt for practical answers. He calls it "teaching an answer that does not yet exist," built with students in the loop. The pressure is real: internship and job interviews now ask, "Tell us how you use AI effectively," and, as Taylor put it, "many will not hire you if you say, 'I prefer not to use AI.'"

When AI shortcuts backfire in 15-112

To use AI well, Taylor says students first need a solid base. That's why 15-113 requires a C or above in Fundamentals of Programming and Computer Science (15-112). According to Taylor and Li, the course's head TA, AI use surged in 15-112 starting in spring 2025, and models got far better at producing correct solutions. "It can absolutely pass 112," Li said. "Not just pass it, but get 100 percent on everything."

Last January, many students used AI on every assignment, then stumbled on quizzes and exams and dropped the course. Some even described their AI dependence as an "addiction." In response, Taylor and Professor David Kosbie reworked 15-112: more quiz weight, in-class homework rewrites, and a flipped classroom to model problem-solving. They didn't ban AI outright; instead, they explained where it helps and where it hurts, then let students choose.

This semester, with Kosbie and visiting instructor Lauren Sands, the course adds more mandatory recitations to increase student-TA interaction. Office-hour attendance had dipped as students chased instant replies from AI. As Taylor put it: AI can sound convincing even when it's wrong, and it often conflicts with how a human would actually reason through a problem.

A different stance in 15-122

Principles of Imperative Computation (15-122), a CS requirement, bans AI use. Co-instructor Anne Kohlbrenner says the course is about reasoning skills and proofs - not learning to work with AI. "We're trying to teach students to do that kind of reasoning on their own," she said. "You can't learn theoretical reasoning through AI."

Using a probabilistic model to flag AI-generated code, Kohlbrenner found that students likely using AI earned, on average, two letter grades lower than those who didn't. The policy aims to deter students who are on the fence while pointing them to other courses - such as 15-113 or Foundations of Software Engineering (17-313) - to learn responsible AI use.

What the broader data says

Most college students report using generative AI to get quick answers with minimal effort. They're far more likely to use AI as a learning aid when professors show them how and set clear expectations, according to the University of Southern California's Center for Generative AI and Society. See their work here: USC Center for Generative AI and Society.

Nationally, faculty feel the shift. By November 2025, 79 percent believed their teaching model would be affected, and 73 percent had faced academic integrity issues tied to generative AI, per research associated with the American Association of Colleges and Universities and Elon University. Resource hub: AAC&U: Artificial Intelligence.

A practical playbook for educators

1) Set AI intent by course type

Make the policy serve the learning goals. If the aim is proofs, invariants, and reasoning, restrict or ban AI. If the aim is applied programming and software practice, allow guided use with strict process documentation.

2) Require foundations before AI

Gate AI access behind prerequisite skills (as 15-113 requires at least a C in 15-112). Use a short diagnostic early in the term. State plainly: AI permissibility expands as core skills are demonstrated.
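
To make the diagnostic concrete, here is a hedged sketch of the kind of short, AI-free code-tracing item that checks core fundamentals before expanded AI use is allowed. The function and the answer key are illustrative, not drawn from any actual CMU course.

```python
# Illustrative diagnostic item: students trace this by hand, with no
# interpreter and no AI, and write down exactly what it returns.

def mystery(values):
    total = 0
    snapshots = []
    for v in values:
        if v % 2 == 0:
            total += v               # even values grow the running sum
        else:
            snapshots.append(total)  # odd values snapshot the sum so far
    return snapshots, total

if __name__ == "__main__":
    # Hand-traced answer key: ([0, 2, 6], 12)
    # odds snapshot 0, then 2, then 6; evens sum to 2 + 4 + 6 = 12
    print(mystery([1, 2, 3, 4, 5, 6]))
```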

3) Shift assessment to process, not just product

Increase in-class quizzes, code tracing by hand, oral check-ins, and "rewrite from scratch" sessions. Require an AI usage log: prompts, model/version, what was accepted, what was changed, and why. Grade the log.
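
One hedged way to make the log concrete and gradeable is a fixed-field entry submitted with each assignment. The field names and sample values below are illustrative conventions, not a CMU requirement.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIUsageEntry:
    """One gradeable record of AI assistance on an assignment (illustrative fields)."""
    assignment: str      # e.g. "hw4: word frequencies"
    model_version: str   # e.g. "gpt-4o, web UI"
    mode: str            # "ask" (conversational) or "edit" (in-place code changes)
    prompt_summary: str  # what was asked, in the student's own words
    accepted: str        # which suggestions were kept
    changed: str         # what the student rewrote, and why
    verified_by: list = field(default_factory=list)  # e.g. ["unit tests", "TA walkthrough"]

entry = AIUsageEntry(
    assignment="hw4: word frequencies",
    model_version="gpt-4o (web UI)",
    mode="ask",
    prompt_summary="Asked for edge-case test ideas for empty input and punctuation.",
    accepted="Two suggested test cases.",
    changed="Rewrote the tokenizer myself; the suggested regex dropped apostrophes.",
    verified_by=["unit tests", "peer review"],
)
print(json.dumps(asdict(entry), indent=2))
```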

4) Teach AI like a tool - not a crutch

  • Use AI to plan and scaffold before coding. Ask for approach, tradeoffs, and test ideas - not just final code.
  • Build in small pieces. Generate, run tests, then ask AI to explain diffs line by line (a test-first sketch follows this list).
  • Know the modes: ChatGPT and Copilot "Ask" are conversational, while Copilot "Edit" changes code in place. Keep everything under version control for traceability.
  • Require disclosure: name the model, mode, and the specific contribution.
  • Verify with peers and TAs. Fluent prose is not proof. Human reasoning catches subtle errors fast.
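
A minimal sketch of that "build in small pieces" loop, assuming a plain unittest setup: the student writes the test for the next small piece first, and an AI-suggested implementation is kept only after it passes and the student can explain every line of the diff. The function here is a made-up example, not a course assignment.

```python
import unittest

# Step 1: write the test for the next small piece first, before asking AI for code.
class TestRollingMean(unittest.TestCase):
    def test_window_of_three(self):
        self.assertEqual(rolling_mean([1, 2, 3, 4], 3), [2.0, 3.0])

    def test_window_larger_than_data(self):
        self.assertEqual(rolling_mean([1, 2], 5), [])

# Step 2: paste in the AI-suggested implementation, run the tests,
# and keep it only once it passes AND you can explain each line.
def rolling_mean(xs, k):
    if k <= 0 or k > len(xs):
        return []
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

if __name__ == "__main__":
    unittest.main()
```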

5) Design AI-resilient assignments

  • Use custom datasets, course-specific APIs, or hardware constraints that require authentic engagement.
  • Emphasize multi-step reasoning and proofs tied to course invariants.
  • Randomize parameters (a per-student seeding sketch follows this list). Ask for explanations, design rationales, and test plans.
  • Assign code reading, bug hunts, and "explain this function's contract and invariants."
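
A hedged sketch of per-student parameter randomization, assuming each student has a stable identifier: the same ID always yields the same parameters, so grading stays reproducible, while an AI answer copied from someone else's variant no longer matches. The parameter names are illustrative.

```python
import hashlib
import random

def assignment_params(student_id: str, assignment: str) -> dict:
    """Derive stable per-student parameters from an ID (illustrative)."""
    digest = hashlib.sha256(f"{assignment}:{student_id}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return {
        "grid_size": rng.choice([8, 10, 12]),
        "target_sum": rng.randint(40, 90),
        "forbidden_digit": rng.randint(1, 9),
    }

print(assignment_params("astudent1", "hw5"))
print(assignment_params("astudent1", "hw5"))  # identical: same seed, same parameters
print(assignment_params("bstudent2", "hw5"))  # almost certainly different
```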

6) Protect human contact

Schedule mandatory recitations, small-group clinics, and structured office hours. Teach students how to ask good questions, not just paste prompts. Normalize slower, deeper help over instant replies.

7) Address the "addiction" dynamic

Build simple guardrails: weekly self-checks on AI reliance, short AI-free problem sprints, and clear pathways to support. If dependence is high and performance is falling, escalate to advisors early.

8) Be transparent about detection

Detection is probabilistic. Use it as a conversation starter, not a verdict. Pair signals with viva voce checks, code walkthroughs, and opportunities to demonstrate understanding.

Sample policy language you can adapt

  • Permitted uses: brainstorming, outlining approaches, generating starter tests, and clarifying error messages.
  • Prohibited uses: submitting AI-generated solutions without substantial modification and understanding; using AI on take-home exams; sharing prompts that disclose restricted materials.
  • Disclosure required: list model/version, prompts used, and the exact lines accepted. Include a brief reflection on changes made and why.
  • Assessment weighting: in-class quizzes and oral checks (X%), projects with AI logs (Y%), exams (Z%).
  • Consequences: failure to disclose AI use or misrepresentation will be treated as an integrity violation.
  • Support: TA clinics, office hours, and guidance on effective AI study habits.

What to teach explicitly about tools

  • Model behavior differs by mode: conversational "Ask" vs. in-place "Edit." Keep diffs in Git and annotate what changed and why (a commit-hook sketch follows this list).
  • Prompt patterns that raise quality: "Explain your reasoning," "Propose edge-case tests," "Critique my approach against constraints A, B, C."
  • Trust tests over text. Every AI-suggested change must pass unit tests and a human explanation.
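
One hedged way to keep that traceability in the repository itself is a commit-msg hook that asks for a disclosure trailer on every commit. The trailer name ("AI-Assisted:") is an assumed local convention, not a Git or course standard.

```python
#!/usr/bin/env python3
"""Illustrative commit-msg hook: save as .git/hooks/commit-msg and mark it executable.

Asks every commit message to state whether AI helped, e.g.
    AI-Assisted: Copilot edit mode, refactored load_config()
or, when no AI was used:
    AI-Assisted: none
"""
import sys

def main() -> int:
    msg_path = sys.argv[1]  # Git passes the path to the commit message file
    with open(msg_path, encoding="utf-8") as f:
        message = f.read()

    if "AI-Assisted:" not in message:
        print("commit-msg: please add an 'AI-Assisted:' trailer", file=sys.stderr)
        print("  e.g.  AI-Assisted: none", file=sys.stderr)
        return 1  # a non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```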

The bottom line

No one has the final playbook yet. But the direction is clear: teach the foundations, make process visible, keep humans in the loop, and treat AI as a tool that requires instruction and scrutiny. As Taylor put it, with a bit of structure and student collaboration, we can find practices that actually improve learning - and do meaningful work with them.

If your department needs structured material on coding with AI to support responsible teaching and student practice, see this resource: AI Certification for Coding.

