CSULB faculty put AI to work for students, from chatbots to critical thinking

CSULB faculty won CSU awards for AI pilots that center judgment, care, and human thinking. Highlights: a teaching chatbot, TILT-based tasks, lateral reading, and a reasoning gym.

Categorized in: AI News, Education
Published on: Dec 02, 2025

Four Cal State Long Beach faculty members earned awards in the inaugural CSU-wide Artificial Intelligence Educational Innovations Challenge. Their projects share a clear goal: integrate AI in ways that build judgment, reduce stress, and keep human thinking at the center of learning.

Across teacher education, business, English, and health care management, these pilots show how to move beyond "answer machines" and create structured experiences that improve analysis, ethics, and reflection.

Learning to teach with a chatbot: Remi

Associate professor Heather Macías built a Beach Teach chatbot called Remi, named after the remora fish that supports larger sea life. Remi checks in on student teachers' wellbeing first, then offers research-backed guidance drawn from articles, videos, and resources curated and vetted by faculty.

Student teaching can be intense. Remi provides timely support, points users to campus resources like counseling and basic needs services, and answers classroom questions with grounded best practices. A wider roll-out is targeted for 2026.

Building AI literacy in business courses with TILT

Business lecturer Claudia Barrulas Yefremian is using the Transparency in Learning and Teaching (TILT) framework to embed AI into activities, assignments, and assessments. The focus: use AI as a tutor and thinking partner, never a shortcut.

Students report that AI helps them analyze and reflect more deeply without replacing original work. Next up: a digital book by spring 2026 for CSU instructors, with materials available in Canvas.

Rethinking research with LLMs and lateral reading

English lecturer Geri Lawson and co-PI E. Jann Harris invite students to treat large language models (LLMs) as co-researchers. Students prompt an LLM to generate four sources, then "read laterally" to verify relevance, credibility, and freshness, with no blind acceptance.

They're building modules that explain strengths and limits of LLMs, emphasize ethical use, and prevent "offloading" thinking. Mini-lessons are running in English 100A, 310, 337, and 4/510 now, with Canvas modules planned by June 2026. The tools keep shifting, so the curriculum stays adaptable.

A structured space for reasoning: ThinkMate Edu

Associate professor Sara Nourazari co-designed ThinkMate Edu, a "critical-thinking gym" that turns AI into a reasoning space rather than an answer box. Students analyze complex issues, practice ethical reasoning, and reflect on their methods inside a controlled environment.

In a master's course, students write papers without AI, then use ThinkMate to critique their work across set dimensions and submit a transcript of their interactions. The structure encourages experimentation with clear guardrails. Faculty across departments are already asking to pilot it.

What educators can borrow from these pilots

  • Start with care: check in on student wellbeing and provide resource pathways before tackling content.
  • Curate the knowledge base: seed AI tools with vetted sources so guidance aligns with program standards.
  • Teach verification: make lateral reading a default habit for any AI-suggested source.
  • Reward the process: require prompts, transcripts, and reflections so you can assess thinking, not just outputs.
  • Be transparent: align activities with TILT practices so students understand purpose, tasks, and criteria.
  • Use your LMS: package modules in Canvas for easy reuse and sharing across sections.
  • Set ethical norms: define acceptable AI use, cite AI assistance, and make misuse consequences clear.
  • Iterate in public: pilot, gather feedback, and adjust; tools will change, but your principles won't.

Quick ideas you can use next term

  • Add a "wellbeing first" prompt to any AI assistant or chatbot in your course and include referral scripts for support services.
  • Create a lateral-reading assignment: have students get four sources via an LLM, then verify each through independent checks and explain keep/drop decisions.
  • Require an AI-use statement with artifacts: prompts, settings, transcripts, and a 150-word reflection on how AI influenced the work.
  • Design rubrics that credit reasoning quality, evidence checks, and ethical use alongside final products.
  • Host a peer mini-workshop where students critique AI-generated suggestions against course rubrics and research standards.
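The "wellbeing first" prompt idea above can be sketched as a simple routing check that screens a student's message for signs of stress before answering content. This is a minimal, hypothetical sketch; the keyword list, referral script, and content handler are all illustrative assumptions, not part of any CSULB tool.

```python
# Hypothetical "wellbeing first" routing for a course chatbot.
# STRESS_KEYWORDS, REFERRAL_SCRIPT, and answer_content_question are
# illustrative stand-ins, not real Remi internals.

STRESS_KEYWORDS = {"overwhelmed", "anxious", "burned out", "stressed", "exhausted"}

REFERRAL_SCRIPT = (
    "It sounds like things are heavy right now. Campus counseling and "
    "basic-needs services are available. Would you like those contacts?"
)

def answer_content_question(message: str) -> str:
    # Stand-in for the content path, e.g. retrieval over a
    # faculty-vetted knowledge base followed by an LLM call.
    return f"Here is research-backed guidance on: {message}"

def respond(message: str) -> str:
    """Check wellbeing first; only then answer the content question."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in STRESS_KEYWORDS):
        # The wellbeing check fires first and routes to support resources.
        return REFERRAL_SCRIPT
    return answer_content_question(message)
```

In practice the keyword check would be replaced by the assistant's own classification of the message, but the ordering is the point: the support-routing branch runs before any content answer.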

Bottom line: keep students thinking. Use AI to prompt better questions, clearer reasoning, and stronger judgment, then assess the process as much as the product.

