Teaching AI without losing the learner: Willamette's high-touch model in action
Willamette University Associate Professor of Exercise and Health Science Brandi Row Lazzarini BA'96 approached artificial intelligence first as a learner. While working on a research project in 2023, as ChatGPT went mainstream, she used AI to level up her programming skills for analyzing biomechanics data.
The result wasn't just better code. It sparked a bigger question: how can AI support learning without getting in the way of it? She began collecting perspectives from across the spectrum and brought that inquiry into her courses.
Her core goal is straightforward: help students build the self-awareness and critical thinking they need to use AI effectively and responsibly. Then give them room to apply those skills on projects that actually matter to them - whether they're future physical therapists drafting patient education materials or first-year students probing how AI might influence their own career search.
A methodical classroom playbook
In last fall's first-year colloquium, "Generative AI and the Learning Experience," students started with a close look at the science of learning and how AI can affect it. From there, they wrote analytical essays on learning-related concepts using a structured process that put human thinking first.
- Survey with AI: Students used AI to scan the academic terrain and identify which sources deserved deep reading.
- Switch to human review: They traded peer feedback, strengthened arguments, and refined structure - manually.
- Use AI late and lightly: Only after real revision did they ask AI for a final review. By then, many noticed the bot added little. Their engaged thinking had already gone beyond the tool's surface-level feedback.
Students felt the work was both timely and useful. "I don't think it's possible to fully ignore the reality of artificial intelligence's continued development, which is why classes like this AI colloquium are so important to take," said Sonali DeSilva-Craycroft BA'29.
Why a high-touch environment matters
For Row Lazzarini, Willamette's small, hands-on classes are the right setting to ensure the learner - not the tool - is the one growing. Frequent interaction, real accountability, and authentic projects reduce the temptation to use AI as a shortcut.
Students choose topics that mean something to them, apply course skills, and justify how their projects can make an impact. That autonomy builds ownership. It also makes it easier to see where AI helps - and where it gets in the way.
Projects that stick
The colloquium concluded with a collaborative project on AI's real-world effects. Each team applied AI in a way that fit its focus and values.
- One group analyzed how commercial artists could be harmed by AI image tools and explored responses.
- Another built a website to help college students understand risks tied to AI therapy tools.
- Others looked at how AI will influence their career search or the ways businesses operate.
"The group project was the highlight of the class to me," said Major LeProwse BA'29. "Our topic focused on how artificial intelligence is actively changing the way businesses operate. We wrote a proposal and then created a website to present our research." Their site included a career table estimating how likely different jobs are to be automated - a clear example of students using AI to study the very trends that will affect their futures.
Practical takeaways for educators
- Set the intent up front: define what you want students to learn that AI cannot do for them (e.g., judgment, synthesis, ethical reasoning).
- Place AI at specific points in the workflow: early for scoping and idea generation; late for proofreading or counterargument prompts - not as a replacement for reading and thinking.
- Prioritize human interaction: require peer review, in-class debates, and oral defenses to surface original thinking.
- Grade the process, not just the product: ask for reading logs, prompt histories, and reflection on how AI was used and why.
- Assign meaningful problems: let students choose topics with personal or professional relevance, then justify real-world impact.
- Teach limits and risks: address bias, hallucinations, privacy, copyright, and domain-specific concerns (e.g., patient safety, academic integrity).
What this model shows
AI can accelerate the "front-end" work - scanning sources, mapping options, clarifying angles. But deep learning still comes from the messy middle: reading closely, testing arguments, and iterating with other humans.
With clear guardrails and high expectations, students learn to use AI as a thinking partner, not a crutch. That's how they outpace the bot - and build durable skills they can carry into any field.
Further reading
Want a ready-to-go path for faculty or student upskilling? Explore curated AI skill paths by role at Complete AI Training, and use them to support the same methodical, human-first approach outlined above.