Harvard's New Playbook for Teaching with AI
AI now sits beside word processors and spreadsheets in the academic toolkit. The question isn't "allow or ban," but "where, how, and why." Recent conversations across Harvard point to a clear direction: use AI to deepen learning, keep humans in the loop for judgment, and set transparent rules.
Rethinking Assignments at the Kennedy School
Teddy Svoronos applies a traffic-light policy for AI use: green (use freely), yellow (use with limits), and red (no use when it would undermine the learning objective). A red designation often precedes an in-class debate in which students must build and defend arguments themselves. Green invites experimentation and structured reflection.
Reflection is part of the assignment. Students answer: What did AI help with? What did it miss? What did you learn by using it? This shifts AI from shortcut to study partner.
He also pilots AI-facilitated oral exams. Students enter a Socratic dialogue with a conversational AI trained on course material. The AI doesn't grade; instructors assess transcripts. The goal: new practice formats, same human judgment.
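To make the mechanics concrete, here is a minimal, illustrative sketch of what a Socratic oral check could look like in code. It is an assumption-laden example, not Svoronos's actual setup: it uses the OpenAI Python SDK, a placeholder model name, and a hypothetical course summary, and it simply saves the transcript so a human instructor can assess it afterward.

```python
# Minimal sketch of an AI-facilitated oral check (illustrative only).
# Assumes the OpenAI Python SDK; the model name and course summary are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COURSE_SUMMARY = "Week 4: interpreting regression coefficients and confidence intervals."

SYSTEM_PROMPT = (
    "You are a Socratic examiner for a graduate policy course. "
    f"Course material: {COURSE_SUMMARY} "
    "Ask one probing question at a time, never lecture, never reveal answers, "
    "and never assign a grade. Push the student to justify each claim."
)

def run_oral_check(turns: int = 3) -> list[dict]:
    """Run a short Socratic exchange and return the transcript for human review."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for _ in range(turns):
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        question = reply.choices[0].message.content
        print(f"\nExaminer: {question}")
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": input("Student: ")})
    return messages

if __name__ == "__main__":
    transcript = run_oral_check()
    # Instructors, not the model, assess the saved transcript.
    with open("oral_check_transcript.json", "w") as f:
        json.dump(transcript, f, indent=2)
```

The design choice mirrors the point above: the model only asks questions, and evaluation stays with the instructor reading the transcript.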
Another sequence pairs current skills with stretch goals. First, students produce a data visualization in Excel. Then they use AI to generate an advanced version. Finally, they audit the AI output: What's accurate? What's fabricated? What needs verification before public or academic use?
Students help write the course AI policy. That joint authorship builds transparency, aligns expectations, and boosts ownership of learning outcomes.
AI as a "Thinking Partner" at Harvard Medical School
Tari Tan's students annotate their own lesson materials, submit them to ChatGPT, then compare. They evaluate prompt quality, bias, and alignment with goals. The takeaway is pointed: AI can mimic fluency, but it doesn't reason the way humans do.
Metacognition is the point. When students explain how they used AI, they can tell whether they are learning or outsourcing their thinking. This adds cognitive load, and sometimes the tool becomes a distraction; that tension is part of the lesson.
Grades focus on reflection and meaningful use, not the AI's output. Over time, students write better prompts and critique results more sharply. They spot hallucinations, jargon that hides weak logic, and plausible-sounding fluff. Good teaching stands; uncritical routines don't.
Executive Education at HBS: From Concepts to Operations
Harvard Business School is testing AI where it counts: workflow. One custom GPT turns long-form course feedback into visual summaries and actionable insights. It's a warm handoff to faculty, not a replacement for judgment.
To speed faculty adoption, teams built scaffolded prompt libraries: starter phrasing that lowers the learning curve and standardizes quality. This helps instructors get consistent, useful outputs without guesswork.
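As a rough illustration, a scaffolded prompt library could be as simple as the sketch below. The entry names, templates, and the build_prompt helper are hypothetical assumptions, not HBS's actual tooling; the feedback-summary entry echoes the custom GPT use case described above.

```python
# Hypothetical sketch of a scaffolded prompt library (not HBS's actual implementation).
# Each entry gives faculty tested starter phrasing; $placeholders are filled per use.
from string import Template

PROMPT_LIBRARY = {
    "feedback_summary": Template(
        "You are summarizing end-of-course feedback for faculty. "
        "From the comments below, list the top 3 recurring themes, one representative "
        "quote per theme, and one concrete, actionable suggestion per theme.\n\n"
        "Comments:\n$comments"
    ),
    "case_discussion_questions": Template(
        "Draft 5 discussion questions for the case '$case_title', ordered from "
        "fact-finding to judgment calls, each answerable in under two minutes."
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a library entry so instructors get consistent prompts without guesswork."""
    return PROMPT_LIBRARY[name].substitute(**fields)

if __name__ == "__main__":
    prompt = build_prompt(
        "feedback_summary",
        comments="Pacing felt rushed in week 3.\nMore time for peer discussion, please.",
    )
    print(prompt)  # paste into an approved, walled-garden tool or send via its API
```

Keeping the templates in one shared file is the point of the scaffold: everyone starts from vetted phrasing instead of improvising.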
For asynchronous experiences, the team tested AI-generated avatars to populate virtual classrooms. Early versions felt too artificial, but newer platforms like HeyGen show progress. The goal is simple: help remote learners feel present.
Data privacy is non-negotiable. HBS requires tools that don't train on user data, run in a walled garden, meet strict contractual terms, and allow safe uploads of case materials that contain no personally identifiable or financial data.
What This Means for Your Course
- Publish a traffic-light policy for AI with clear examples of green, yellow, and red tasks.
- Grade reflections and process, not AI text. Ask students to document prompts, versions, and decisions.
- Pair human-first tasks with AI stretch tasks, then require an audit of AI outputs for accuracy and sources.
- Use short AI oral checks to surface reasoning gaps; keep grading human.
- Co-author your AI policy with students to boost buy-in and clarify boundaries.
- Create a prompt library for your course so faculty and students start strong and stay consistent.
- Run targeted pilots (e.g., feedback summarization) before scaling across courses.
- Set privacy guardrails: no training on user data, walled gardens, and strict data-sharing rules.
- Teach students to spot hallucinations, jargon masking weak logic, and missing citations.
- Avoid blanket bans. Clear norms reduce confusion and prevent underground use.
If you need a framework for risk and privacy, see the NIST AI Risk Management Framework. For examples and updates on building custom GPTs for teaching and support, explore practical case studies and tools.
The emerging playbook is straightforward: define where AI fits, make students think about how they use it, keep humans in evaluation, train faculty, and protect data. Do that, and AI becomes an amplifier for deep learning, not a shortcut.