UW-Parkside Targets Campuswide AI Fluency by 2028: What Educators Should Know
The University of Wisconsin-Parkside is rolling out a multi-year plan to make students, faculty, and staff fluent in artificial intelligence by 2028. The effort pairs curriculum integration with campus operations improvements, supported by partnerships with companies including Microsoft and Ordify AI.
The goal is straightforward: graduates who can use AI responsibly, think critically about when it adds value, and know when human judgment must lead. For educators and administrators, that means clear policies, targeted training, and course designs that keep learning outcomes at the center.
What "AI Fluency" Means on Campus
Parkside defines AI fluency as the ability to use AI tools critically, ethically, and effectively, and to judge when not to use them at all. The university emphasizes that AI is a tool, not a replacement for learning or professional judgment.
That framing matters. It invites faculty to integrate AI where it supports instruction (analysis, feedback, ideation, accessibility) without compromising academic integrity or core disciplinary skills.
Why This Matters for Educators and Administrators
Employers are asking for graduates who can work with AI on real tasks: research, writing, data analysis, customer support, and operations. Parkside's approach ties classroom practice to those expectations and supports regional workforce needs in southeastern Wisconsin.
For campus leaders, the initiative is a chance to standardize guidance across departments, reduce confusion for students, and avoid shadow use of tools that create risk. Clear policy, consistent training, and shared resources reduce friction and lift instructional quality.
Task Force Priorities
- Policy: Course and campus-level guidance on acceptable AI use, disclosure, attribution, and data security.
- Professional development: Hands-on training for faculty and staff; office hours and quick-start playbooks.
- Classroom integration: Examples by discipline, assignment templates, and assessment strategies that preserve learning integrity.
- Student readiness: Orientation modules on ethical use, prompt strategy, verification, and collaboration norms.
- Operations: Responsible adoption for administrative workflows (advising, communications, analytics) with clear guardrails.
What Faculty Can Do This Semester
- Set a clear AI use policy in your syllabus. Define allowed tools, contexts, disclosure requirements, and consequences. Keep it short and unambiguous.
- Redesign one assessment to show process. Ask for drafts, outlines, citations, or reflection notes that reveal student thinking.
- Teach verification. Require students to cross-check AI outputs with credible sources and label any AI-assisted sections.
- Integrate tool literacy where it helps learning. For example, compare human-written and AI-generated summaries, then critique accuracy and bias.
- Pilot a micro-assignment with Microsoft tools (e.g., Copilot for research synthesis) and capture what worked and what didn't. See practical options in Microsoft AI Courses.
- Collect quick feedback. One-minute reflections can surface confusion early and guide your next iteration.
Program Leads: How to Operationalize by 2028
- Define milestones: baseline (policy + PD), pilot (selected departments), scale (campuswide standards), sustain (continuous improvement).
- Stand up a central support hub: short guides, vetted tool list, sample assignments, workshop calendar, and help request intake.
- Adopt procurement and privacy standards: vendor risk checks, data handling rules, and bias testing expectations.
- Use common rubrics: clarity on disclosure, process evidence, source verification, and discipline-specific criteria.
- Measure impact: student competence gains, faculty adoption, course outcomes, and operational efficiency. Share wins and lessons learned each term.
Keeping Human Judgment in Charge
AI can speed low-level tasks and expand access to examples, but it can also hallucinate, overgeneralize, or flatten original thinking. Parkside's stance keeps human judgment as the final authority, a position that serves both academic integrity and professional readiness.
The practical move is to teach students how to interrogate outputs, cite assistance, and make defensible decisions. That's fluency.
Helpful Resources
- AI Learning Path for Teachers - step-by-step modules for policy, classroom use, and assessment.
- AI for Education - examples, tools, and course ideas by discipline.
- Microsoft Responsible AI principles - useful for aligning campus policy and vendor reviews.
What to Watch Next
- Department-level pilots and published exemplars that others can copy.
- Updates to academic integrity and accessibility policies reflecting AI use.
- Faculty and staff training cadence, plus student onboarding to set shared expectations.
Bottom line: Parkside's initiative gives educators permission and structure to use AI where it truly helps, and to set limits where it doesn't. Start small, make the process visible, and build shared practices that hold up in the classroom and the workplace.