Learning in an AI World: Practical Playbooks for Schools and Universities
AI is already in classrooms. The useful question is not "should we allow it?" but "how do we use it without losing the point of learning?"
Here's what K-12 leaders and university faculty are doing right now, what's breaking, and what you can copy by Monday.
K-12: "AI-assisted, never AI-led"
Winnipeg School Division set a clear, simple anchor: don't put personal, confidential, or sensitive information into AI tools. Everything else is guided by one principle - AI-assisted, never AI-led.
Instead of bans, the division is teaching responsible use. Leaders asked staff and families for input, hosted open discussions, and are supporting teachers to make lessons more equitable and engaging.
In practice: Russell Miller (Grade 4, Greenway School) uses Google Gemini as a translation aid and planning assistant. He builds vocabulary charts with English, Arabic, and phonetics, and last year helped students create "Bill Nye-style" music about anatomy using Suno.
Becca Koenig (student teacher, Earl Grey School) uses ChatGPT to break creative ruts. A simple prompt for a flight unit sparked an art project with the aurora borealis and a deeper look at Indigenous perspectives. The division's framework asks staff to ground every AI use in a simple test: does it serve reconciliation and human flourishing?
K-12: What works (copy this)
- Privacy first: No student or staff identifiers in prompts. Strip details or use synthetic data (see the sketch after this list).
- Use AI upstream, not downstream: Brainstorming, translation, differentiation, exemplars, and rubrics - yes. Generating student answers or grading - no.
- Require learning artifacts: Notes, outlines, drafts, and short reflections on "what changed after AI?"
- Center equity and culture: Use AI to bring home languages into the room and to surface local, community-grounded examples.
- Invest in training: Microcredentials, peer demos, and short challenge-based workshops.
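What "strip details" can look like in practice: a minimal sketch, assuming a small Python helper you write and run yourself before a prompt ever reaches an AI tool. The patterns, field names, and scrub_prompt function below are illustrative assumptions, not part of any district system, and simple patterns will miss plenty, so a human still reads every prompt.

```python
import re

# Illustrative patterns and field names only; a real deployment needs a
# vetted privacy policy, not a regex list. Everything here is hypothetical.
PII_PATTERNS = {
    "student_id": re.compile(r"\b\d{6,9}\b"),             # e.g. 6-9 digit IDs
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace likely identifiers with placeholders before the text
    leaves your machine. A teaching sketch, not a guarantee."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

raw = "Draft a reading plan for Amina (ID 4051123, amina@example.org), Grade 4."
print(scrub_prompt(raw))
# Names like "Amina" still slip through simple patterns, so human review
# of every prompt stays part of the rule.
```

The point isn't the tool; it's the habit: clean first, prompt second, and treat anything the filter can't catch as your responsibility, not the AI's.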
If your team needs role-specific upskilling paths, see AI courses by job at Complete AI Training.
Higher Ed: Cheating pressure is real - the fix is design, not policing
Faculty report more students using AI to write essays and code. Detection is shaky and time-consuming. Large language models produce "hallucinations," including fabricated citations and invented facts. Coverage from Nature highlights how often chatbots make up references.
Jenna Tichon (U of M Faculty Association) flags the deeper cost: we risk losing core cognitive skills - summarizing, analyzing, and that early spark of creativity - if students outsource too much thinking. One policy won't fit every discipline.
Roisin Cossar (History, U of M) found many students who used AI were overwhelmed or didn't know how to approach scholarly reading. Shifting the course design helps: in-class reading with physical annotation, hands-on projects, and assignments that force contact with primary sources.
Computer Science associate head John Braico takes a tiered approach: discourage AI in early courses; allow it in advanced work with responsibility for quality. The goal is simple - graduates who can think, critique, and ship reliable code, with or without AI.
Assessment redesign that reduces AI misuse
- Break big assignments into checkpoints: proposal → outline → draft → revision, with quick feedback at each step.
- Make thinking visible: in-class annotation, problem walkthroughs at the board, or short oral defenses.
- Localize prompts: tie tasks to class-only materials, field data, lab results, or community partners.
- Use primary sources and artifacts: transcription, data cleaning, and build-with-your-hands projects.
- Responsible use in advanced courses: "Use AI if you wish; you own the output's accuracy." Require a brief AI disclosure note.
Policy essentials you can ship this term
- Data rule: no sensitive data in AI tools. Define "sensitive" with examples.
- Permitted uses: brainstorming, translation, scaffolding, and feedback summaries. What's out: generating final work, grading, or bypassing assigned readings.
- Disclosure: students and staff note where AI helped and how outputs were verified.
- Quality ownership: the human is accountable for accuracy, citations, and ethics.
- Misconduct process: focus on learning recovery - a redo plan, skill-building support, and clear consequences.
- Community input: invite families, staff, and students to short forums; revise each term.
Support students before they default to AI
Many students aren't trying to cheat; they're stuck. They don't know how to read scholarly work, take notes, or break a task down. Build the ladder, then hold them to the standard.
- Give reading guides with questions and key terms.
- Require annotations (pen on paper or digital) before any essay or code submission.
- Share exemplars and concise rubrics, then grade the process and the product.
- Normalize drafts, peer review, and short office-hour check-ins.
Tools educators are actually using
- Google Gemini: translation prompts, vocabulary lists, and planning outlines.
- ChatGPT: idea generation and reframing instructions for different reading levels.
- Suno: creative outputs (e.g., science songs) to encode concepts in memory.
- District PD + microcredentials: structured practice beats tool tours.
Standards and guidance worth reading
- UNESCO guidance for generative AI in education and research
- Nature: AI chatbots and fabricated citations
Mindset
Fear doesn't teach. Clarity does. Keep the rule simple: AI-assisted, never AI-led. Teach students to question outputs, verify claims, and produce evidence of their own thinking.
Start small. Pick one lesson, one policy tweak, and one training action. Review, adjust, repeat.