Green, Yellow, Red: Jonathan Keiser on Bringing AI into the College Classroom

Classroom AI is now a question of how, not if. Traffic-light policies and process-focused assignments teach the ethical, effective use employers expect from day one.

Published on: Dec 17, 2025

AI Enablement in Higher Education: Policies, Practice, and Preparing Students for Work

Colleges are no longer debating whether AI belongs in the classroom - they're figuring out how to use it well. Jonathan Keiser, associate vice president for AI enablement and innovation and chief academic technology officer at the University of St. Thomas, recently laid out a practical path forward in a conversation with WCCO Radio's Vineeta Sawkar.

His core message: banning AI isn't a plan. Thoughtful integration is. That starts with clear policies, purposeful assignments, and building the judgment students will need on day one in their careers.

A simple policy that actually works: the traffic light

Keiser recommends a traffic-light model on every syllabus. It sets expectations in plain language and reduces confusion across sections and programs.

  • Red: No AI use allowed.
  • Yellow: Limited use (for example: brainstorming, outlining, editing, or grammar).
  • Green: AI is permitted in any capacity, with proper citation and process notes.

This isn't just about rules. It's about teaching students how to use AI with intent. When policies are explicit, instructors can design assessments that align with the policy instead of fighting against it.
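
For campuses that share policies across sections, a machine-readable version of the traffic-light model can help keep syllabi consistent. The sketch below is one possible encoding in Python: the level descriptions follow the model above, while the assignment names and policy map are hypothetical illustrations, not anything prescribed in the interview.

```python
# Minimal sketch of a machine-readable syllabus AI policy.
# Level descriptions follow the traffic-light model; the assignment
# names and mappings below are illustrative assumptions only.
from enum import Enum

class AIUse(Enum):
    RED = "No AI use allowed"
    YELLOW = "Limited use: brainstorming, outlining, editing, grammar"
    GREEN = "Any use permitted, with citation and process notes"

# Hypothetical per-assignment policy map an instructor might publish
# alongside the syllabus and keep in a shared repository.
POLICY = {
    "weekly-reading-response": AIUse.RED,
    "research-paper-draft": AIUse.YELLOW,
    "final-project": AIUse.GREEN,
}

if __name__ == "__main__":
    for assignment, level in POLICY.items():
        print(f"{assignment}: {level.name} - {level.value}")
```

A structured policy like this is easy to drop into a course page, and it makes discrepancies between sections visible at a glance.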

Don't sidestep AI - teach it

Faculty are being asked to rethink long-standing assumptions about writing, assessment, and originality. That's healthy. Employers expect graduates to work with AI, critique outputs, and make sound decisions with imperfect information.

Keiser argues for integrating AI into coursework and, at times, requiring it. The goal is to strengthen critical thinking: verify claims, trace sources, test prompts, compare tools, and surface ethical issues like bias and privacy. The work product improves when students show their process, not just the final answer.

Programs that reflect two tracks of demand

At St. Thomas, two graduate paths address different needs. The Master of Science in Artificial Intelligence focuses on building systems - the technical track.

The Master of Arts in AI Leadership (launched this fall) centers on strategy, governance, and change management. Keiser is developing a course for the program, set to launch in February, aimed at helping students lead AI adoption responsibly across teams and institutions.

What this means for your campus

  • Adopt the traffic-light policy in every syllabus this term. Share examples and keep a public repository faculty can copy.
  • Shift assessments from product-only to process-plus-product. Ask for prompt logs, version history, and rationale for model choices (a minimal log format is sketched after this list).
  • Teach AI critique as a core skill: source checking, bias identification, and fact-validation with citations.
  • Clarify citation norms for AI assistance. Require students to declare where, how, and why AI was used.
  • Make ethics real: privacy, copyright, data security, and academic integrity scenarios tied to your institution's policy.
  • Invest in faculty development: workshops by discipline with real assignments, rubrics, and grading examples.
  • Create a feedback loop: run quick pulse checks with faculty and students; iterate policies each term.
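
As referenced in the assessment bullet above, here is one way a prompt-log entry could be structured so students can declare where, how, and why AI was used. The field names and helper function are assumptions for illustration, not a standard format.

```python
# Sketch of a single process artifact: one prompt-log entry a student
# could submit with their work. All field names are assumptions.
import json
from datetime import datetime, timezone

def log_entry(tool, prompt, used_for, rationale):
    """Build one declaration record: where, how, and why AI was used."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # which model or tool was used
        "prompt": prompt,      # what was asked
        "used_for": used_for,  # where the output went (outline, edit, ...)
        "rationale": rationale # why this use fit the course policy
    }

entry = log_entry(
    tool="example-llm",
    prompt="Suggest three counterarguments to my thesis.",
    used_for="brainstorming (yellow-level use)",
    rationale="Policy allows brainstorming; all claims verified against sources.",
)
print(json.dumps(entry, indent=2))
```

Even a handful of entries like this per assignment gives instructors the process visibility that detection tools can't.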

Starter assignments you can deploy this week

  • AI vs. human baseline: Students produce a first draft, then use AI to improve clarity. Submit both versions with a short reflection on what changed and why.
  • Prompt audit: In small groups, students test three prompts on two models and document variance, errors, and fixes.
  • Ethics case memo: Present a realistic classroom or workplace scenario involving AI misuse. Students propose a policy and enforcement plan.
  • Source triangulation: Students use AI to gather claims, then verify each with primary or peer-reviewed sources before submitting.

Academic integrity, simplified

  • Publish a one-page AI use guide per course (policy, allowed tools, examples).
  • Require process artifacts (prompt history, change logs) to reduce detection drama and focus on learning.
  • Align program-level expectations so policies don't whiplash students from class to class.

Faculty support that actually helps

  • Short, discipline-specific clinics: 60 minutes, one rubric, one assignment makeover.
  • Office hours for AI course design with an instructional designer and an academic integrity lead.
  • Centralized tool guidance: approved tools, data policies, and example consent language.

Governance and risk

If your institution is drafting AI guidelines, keep it practical: define approved use cases, data handling, model access, and review cycles. Map risks to clear controls and keep the document living, not static.
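
To make "map risks to clear controls" concrete, a guideline document can pair each named risk with its control and review cadence. The sketch below is illustrative only; the specific risks, controls, and cycles are placeholder assumptions, not recommendations from the interview.

```python
# Illustrative risk-to-control map for a campus AI guideline document.
# Every entry below is a placeholder assumption, not institutional policy.
RISK_CONTROLS = {
    "student data sent to external models": {
        "control": "approved-tool list with vetted data-handling terms",
        "review_cycle": "each semester",
    },
    "undeclared AI-generated coursework": {
        "control": "required process artifacts and per-course citation norms",
        "review_cycle": "each term",
    },
    "policy drift between sections": {
        "control": "program-level alignment review",
        "review_cycle": "annual",
    },
}

for risk, plan in RISK_CONTROLS.items():
    print(f"{risk} -> {plan['control']} (review: {plan['review_cycle']})")
```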

For departments building AI literacy

If your faculty or staff need structured upskilling, review curated AI course lists by role and skill level. Start with what maps to your learning outcomes and assessment needs.

The takeaway is straightforward: set clear rules, make AI use visible, and teach students to think harder, not less. Keiser's approach respects academic standards while meeting the workforce where it already is.

