AI in Schools: Be Humble, Run Small Experiments, Measure Everything
Every few years, a new technology promises to fix education. Film strips were supposed to replace textbooks. Early internet access was supposed to define national progress. Neither lived up to the hype.
The pattern is clear: tools don't transform learning on their own. Communities do. Technology only works as well as the culture, policies, and daily routines that guide its use.
What history tells us
Being first rarely leads to better student outcomes. The first districts that welcomed phones into classrooms didn't gain an edge. Countries that rushed to wire schools didn't pull ahead in academic performance or well-being.
Opening an app is easy. Building practices that stick is hard. That work takes years of iteration with teachers, students, and families.
We've been wrong before about digital skills
For years, schools taught students to judge websites by surface cues: domain names, "About" pages, formatting, and citations. Then research showed experts do the opposite. They leave the page, scan what other sources say, and only then decide if a site deserves attention.
That method, called lateral reading, proved faster and more accurate than checklist approaches like CRAAP. If you teach research skills, start with the Stanford History Education Group's civic online reasoning work: Civic Online Reasoning.
AI is an arrival technology
AI isn't like a smartboard you choose to buy. It shows up in students' lives and rearranges classroom realities. Teachers feel the pressure to respond, but they shouldn't be left to figure it out alone.
The right response isn't blanket bans or blind adoption. It's structured experimentation, shared learning, and clear guardrails.
A practical playbook for this year
- 1) Humility: Tell students and staff that current guidance is a best guess. What you teach this year may evolve. Model a stance of revision over certainty.
- 2) Thoughtful experiments: Pick where AI belongs and where it doesn't belong yet. Electives or project-based courses often benefit from broad use. Example: a filmmaking teacher who lets students use AI for ideation, scripting, and technical problem-solving while keeping the creative work central. Contrast that with ninth grade English, where you might set tighter rules to build core writing habits first.
- 3) Local assessment: Don't wait for decade-long studies. Collect baseline student work from pre-AI years (e.g., 2022 lab reports). Pilot an AI-supported practice (e.g., formative feedback on lab write-ups). Compare the outcomes you care about: clarity, accuracy, reasoning, citation quality. Then keep, tweak, or cut the practice.
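The local-assessment step above can be sketched in a few lines of code. This is a minimal illustration, assuming a department scores both the baseline set and the pilot set on the same 1-5 rubric; all scores below are invented for the example, not real student data.

```python
# Minimal sketch: compare baseline vs. pilot rubric scores on one
# dimension (e.g., clarity). All numbers are hypothetical.
from statistics import mean, stdev

def summarize(scores):
    """Return the mean and standard deviation of a list of rubric scores."""
    return mean(scores), stdev(scores)

# Hypothetical 1-5 clarity scores for matched assignments.
baseline_2022 = [3, 3, 4, 2, 3, 4, 3, 2, 3, 4]      # pre-AI lab reports
pilot_scores = [4, 3, 4, 3, 4, 4, 3, 3, 4, 4]       # AI-supported drafts

base_mean, base_sd = summarize(baseline_2022)
pilot_mean, pilot_sd = summarize(pilot_scores)

print(f"Baseline: mean={base_mean:.2f} (sd={base_sd:.2f})")
print(f"Pilot:    mean={pilot_mean:.2f} (sd={pilot_sd:.2f})")
print(f"Difference: {pilot_mean - base_mean:+.2f}")
```

With a sample this small, treat the difference as a signal to discuss in the review meeting, not as proof; repeating the comparison across several rubric dimensions and units gives a fuller picture.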
Clear classroom norms to set now
- Disclosure: Students label where and how they used AI.
- Attribution: If AI generates text, images, or code, students cite the tool and prompt.
- Data privacy: No personal or sensitive information in prompts.
- Original thinking: AI can draft or suggest, but students must show their reasoning and process.
- Tool access: Define which tools are allowed for which tasks and which stages (brainstorming, outlining, editing).
What success looks like
- Students write more clearly and argue with evidence.
- Feedback cycles speed up without losing substance.
- Teachers spend less time on low-value prep and more on coaching.
- Academic integrity improves because expectations are explicit and assessed.
Two plausible futures
By 2035, AI in schools could look like the web: useful, imperfect, and integrated with safeguards. Or it could resemble phones: distracting enough to limit heavily. The difference will come from what educators test, document, and share, starting now.
Quick start for your department
- Choose one course unit to pilot AI support.
- Set norms for disclosure, attribution, and tool scope.
- Collect last year's work as a baseline.
- Run the pilot for 4-6 weeks.
- Hold a review meeting with samples and findings. Decide: keep, adjust, or stop.
You don't need to be first. You do need to be right for your context. Small tests, honest review, shared evidence: that's the path.
If your district wants a structured way to upskill staff by role, see curated options here: AI courses by job.