Preserving learning in the age of AI shortcuts
AI isn't going away. The question for educators is simple: How do we keep students thinking while AI makes tasks easier?
In a recent conversation on "Harvard Thinking," three experts - Michael Brenner (Applied Mathematics), Tina Grotzer (Cognitive Science), and Ying Xu (Education) - laid out a practical path forward. The theme: use AI to raise the bar, not lower it.
What's actually at risk
Learning has two lanes: what students know, and their capacity to learn - planning, self-regulation, critical and creative thinking. Offloading too much to AI threatens the second lane.
In a survey Xu ran with 7,000 high school students, nearly half felt they were relying on AI more than they wanted. Over 40 percent tried to cut back and couldn't. That's a self-regulation problem, not a technology problem.
For a fast primer on why self-regulation matters, see the APA's overview of self-regulated learning.
Age and tool choice matter
- Specialized tools: Phonics, math, science apps - useful with clear goals and teacher oversight.
- Exploration tools for kids: Safer, bounded play is fine. Avoid anthropomorphizing - "it's not your friend."
- General assistants (chatbots): Powerful, but they demand mature planning and restraint. Many students don't have that yet.
Raise the ceiling: redesign the work
Brenner discovered that a chatbot could solve his graduate problem set. Instead of banning AI, he scrapped the syllabus and redesigned the work:
- Each week, students designed a problem a top chatbot couldn't solve, validated the solution numerically (see the sketch below), and convinced a peer it was correct.
- Finals became oral exams at the board: explain, solve, justify.
Results: more reading, deeper understanding, and students operating at the edge of their competence.
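What might that "validate numerically" step look like? Here is a minimal sketch in Python, with a hypothetical ODE and candidate solution standing in for an actual assignment (the course's real problems aren't published here): integrate the problem with a standard solver and check the claimed closed form against it.

```python
# Minimal sketch of numerical validation: the ODE, the claimed solution,
# and the tolerance are illustrative stand-ins, not an actual assignment.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Example problem: y' = -t * y with y(0) = 1
    return -t * y

def claimed_solution(t):
    # Candidate closed-form answer a student (or chatbot) might propose
    return np.exp(-t**2 / 2)

# Integrate the ODE independently of the claimed answer
t_eval = np.linspace(0.0, 3.0, 200)
numeric = solve_ivp(rhs, (0.0, 3.0), [1.0], t_eval=t_eval,
                    rtol=1e-9, atol=1e-12)

# Compare the numerical trajectory against the claimed closed form
max_err = np.max(np.abs(numeric.y[0] - claimed_solution(t_eval)))
print(f"max |numeric - claimed| = {max_err:.2e}")
assert max_err < 1e-6, "claimed solution disagrees with numerical integration"
```

The pattern generalizes: the check is independent of how the answer was produced, so it catches both honest mistakes and confident chatbot errors.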
By contrast, Grotzer saw 60-page AI-padded assignments that read like filler. The better uses she observed: students quizzing themselves, requesting targeted feedback on weak sections, and trying perspective prompts ("What would confuse a parent? A fifth grader?"). Same tool, very different outcomes.
Teach metacognition on purpose
Students should be able to answer two questions: "What is my mind great at?" and "What should I offload to AI?" Grotzer has students map a big Venn diagram - human strengths vs. AI capabilities - and revisit it often.
That clarity prevents lazy outsourcing and guides smart delegation.
Keep the human core of learning
Great tutors do more than deliver information. They read motivation, withhold answers at the right moments, and calibrate support so students leave with a win and the desire to try again.
Xu's research found AI and human feedback can lead to similar recall on some tasks, but students enjoy and trust human guidance more. In one class test, identical feedback was rated more useful when students believed it came from their instructor. Care and relationship amplify learning.
System-level implication: schedule and class size should let teachers actually know students. That social data - goals, confidence, frustration - is instructional gold.
Guardrails that protect thinking (without banning tools)
- Define "allowed use." Brainstorming, outlines, checks, examples = OK. Full drafts, final solutions, and proofs = not OK.
- Require process evidence. Submit prompts, drafts, source notes, and a 150-word reflection on what AI did vs. what you did.
- Use oral spot-checks. Five-minute viva on any submitted work. Randomize who gets called.
- Design "AI-resistant" tasks. Local data, live constraints, multi-step reasoning, novel combinations, and peer persuasion.
- Time-box the tool. Example: "You get 10 minutes of AI help. Plan first. Log what you used it for."
- Neutralize temptation. Build "plan-first" routines and checklists that precede any AI queries.
A quick redesign playbook
- Assignments: Move from "solve my problem set" to "create a problem the model can't solve." Add verification and peer review.
- Writing: Allow AI for brainstorming and structure; require original synthesis, citations, and a voice check via brief oral defense.
- STEM: Pair AI-assisted coding with code tracing, error diagnosis, and whiteboard explanations.
- Assessment: Blend low-stakes frequent checks, oral exams, and transfer tasks that require adapting ideas to new contexts.
- Metacognition: Assign a weekly reflection - Where did AI help? Where did it hurt? What will you do differently next time?
What to measure
- Transfer: Can students apply concepts to fresh, messy problems?
- Reasoning quality: Fewer AI hallucinations passed through unchecked, better error-spotting, stronger explanations.
- Self-regulation: Plans before prompts, adherence to time-boxes, reduced overreliance.
- Engagement: Attendance, voluntary participation, on-time work with fewer last-minute AI dumps.
- Oral performance: Clarity, confidence, and ability to justify choices under light pressure.
For parents: keep AI in context
- Think ecosystem, not apps. Relationships, sleep, outdoor time, and hobbies matter as much as any tool.
- Coach planning. Ask, "What's your plan? Where might AI help? What will you do yourself first?"
- Write a simple family agreement. What's OK, what's off-limits, how you'll review usage together.
- Normalize struggle. Remind kids that productive effort builds skill; shortcuts don't.
The mindset shift
Ignore AI and students will use it anyway - without guidance. Ban it and you cap opportunity. Treat it like a power tool: teach safety, scope, and skill.
Raise expectations. Make the work harder and more human. Measure whether learning actually improves. Then keep what works.
Next step
- AI Learning Path for Teachers - practical ways to integrate AI into planning, feedback, and assessment while protecting critical thinking.