AI on Campus: Practical Lessons from Brooklyn College for Educators
Artificial intelligence moved from science fiction to classroom reality in months, not decades. The shift has educators asking better questions: What should we teach now? Where does AI belong in learning? And how do we keep core academic values intact while students use new tools?
Three Brooklyn College faculty members offer a clear snapshot of what's working and what to change next: Martha Nadell (English), MJ Robinson (Television, Radio & Emerging Media), and Karen Stern-Gabbay (History; Roberta S. Matthews Center for Teaching and Learning).
The first wave: panic, hype, and denial
When ChatGPT hit the news in late 2022, reactions split. Some predicted an AI takeover of education, others pushed for direct integration into coursework, and many tried to ignore it. That spread is normal for a disruptive tool, but it doesn't help students figure out what to do right now.
What AI work looks like in class
Early student submissions generated by AI were easy to spot. The prose felt average: generic phrasing, predictable structure, and thin ideas. It projected confidence without substance, which is the exact opposite of what we want students to practice.
In journalism courses, the tension is sharper. As MJ Robinson notes, students are writing the "first rough draft of history," while the tech can now "write it with" them. That forces a new discipline: analyze how the press covers AI, study how AI changes newsrooms, and define where human judgment stays non-negotiable.
What college should teach now
Karen Stern-Gabbay points out that students arrive with uneven preparation for working with AI. College is where we set expectations, rules, and shared language for responsible use. It's also where we double down on human skills: critical thinking, ethical reasoning, empathy, and analog practices that ground digital work.
Nadell adds a simple truth: universities excel at training students to spot limits. AI is good at predicting common patterns of language and thought. That is useful, but it does not replace originality, context, or care for consequences.
Why educator AI literacy matters
Very soon, K-12 graduates will arrive having learned with AI from a young age. Higher ed must meet them with clarity: ask why you're using AI for a task before you reach for it, keep humans in the loop, and know what you don't know about these systems.
A helpful frame for policy and practice is the NIST AI Risk Management Framework, which centers on transparency, accountability, and documented risk tradeoffs. See NIST's overview of the AI Risk Management Framework for the full guidance.
What Brooklyn College is doing
The Roberta S. Matthews Center for Teaching and Learning has convened workshops on AI in the classroom. The headline issues: academic integrity, data privacy, and the environmental cost of large-scale computing. Those discussions also reinforce why the classroom is more valuable than ever: it's a space to build judgment, not just output.
A practical playbook you can adapt this semester
- Clarify your AI policy. Where is AI encouraged, restricted, or prohibited? State expectations for disclosure and citation of AI assistance.
- Redesign assignments for thinking, not output. Use process logs, oral defenses, in-class drafting, and source audits to surface how students think.
- Teach the "why" before the "how." Students should justify AI use for a task and name risks (accuracy, bias, privacy) before touching a tool.
- Keep human-in-the-loop checkpoints. Require human edits, fact verification, and reflection memos on what AI missed or distorted.
- Address data privacy up front. Explain what tools collect, where data goes, and when not to paste sensitive or personal information.
- Discuss authorship and IP. Who owns AI-augmented work in your course? What counts as original contribution?
- Make integrity concrete. Define misconduct scenarios (undisclosed drafting, fabricated citations) and show what acceptable assistance looks like.
- Surface environmental costs briefly. Acknowledge compute footprints and encourage efficient, purposeful use.
- Build a shared rubric. Assess idea quality, evidence, process, and ethical use, not just polish.
- Model uncertainty. Tell students what you're still learning and invite them to test and report back with evidence.
Signals to watch in student work
- Confident tone with vague or recycled claims.
- Uniform sentence rhythm and clichés without concrete examples.
- Invented sources or citations that almost look right.
- Weak connection to course materials or class discussion.
These are cues for conversation, not accusations. Use them to coach students toward deeper thinking and better habits.
For departments and centers for teaching and learning
- Host cross-discipline demos where faculty show one assignment that works with AI and one that tests beyond AI.
- Publish short, living guidelines. One page, updated each term, with tool lists, policies, and sample language.
- Create student-facing workshops on disclosure, fact-checking, and prompt critique.
- Coordinate with IT on approved tools, access, and privacy standards.
Bottom line
AI will sit in every student's toolset. Our job is to make sure it doesn't replace thought, care, or accountability. Do that, and students leave with judgment that outlasts any model update.
Further support
- Policy and risk baseline: NIST AI Risk Management Framework
- Curated learning paths for educators exploring AI in coursework: Complete AI Training - Courses by Job