How The College's AI Academy Brings AI Into Operations, Teaching, and Research
The College of Liberal Arts and Sciences at Arizona State University launched the AI Academy to raise AI literacy and ship practical initiatives across research, teaching, and operations. Led by Kyle Jensen, assistant dean for AI and emerging digital technologies, the academy meets monthly and builds momentum through structured, collaborative work.
The goal is simple: connect people who are experimenting with AI, align on shared frameworks, and turn scattered pilots into sustainable systems that improve outcomes at scale.
Why operations teams should care
AI projects rarely fail on tech. They fail on process, governance, and change management. The academy closes those gaps by giving departments a common language, repeatable templates, and a cadence for iteration.
As AI specialist Jonathan McMichael put it, a static "write a plan and execute it" playbook doesn't hold up. The academy uses an adaptive structure so leaders can make sense of change together and ship improvements step by step.
How the academy works
- Monthly sessions that stack: each meeting informs the next and builds shared resources.
- Cross-discipline perspectives: humanities, natural sciences, and social sciences surface repeatable patterns.
- Action plans over one-off tools: initiatives are framed around outcomes for ASU's community, with clear on-ramps so other teams can adopt them.
Operational playbook you can copy
- Set a monthly cadence: updates, demos, blockers, and decisions in one room. Publish notes and next steps the same day.
- Define use-case tiers: quick wins (automation, feedback tools), managed pilots (course support, research workflows), and strategic bets (infrastructure, data integrations).
- Create lightweight governance: purpose, data sources, privacy, human-in-the-loop, evaluation criteria, and rollback plan on one page.
- Standardize evaluation: accuracy, bias checks, time saved, learner/researcher satisfaction, and cost per outcome.
- Centralize assets: prompt libraries, workflow diagrams, how-to videos, and rubrics in a shared repository.
- Close the loop: every pilot reports impact, risks, and "copy this" guidance for the next team.
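The one-page governance template above can be encoded so intake tooling can reject incomplete submissions automatically. A minimal sketch in Python; the field names mirror the playbook bullets and are illustrative, not an ASU or academy standard:

```python
from dataclasses import dataclass

# Hypothetical one-page governance record; fields follow the playbook
# bullets (purpose, data sources, privacy, human-in-the-loop,
# evaluation criteria, rollback plan). Names are illustrative.
@dataclass
class PilotGovernance:
    purpose: str
    data_sources: list
    privacy_notes: str
    human_in_the_loop: str       # who reviews outputs, and when
    evaluation_criteria: list    # e.g. accuracy checks, time saved
    rollback_plan: str

    def missing_fields(self) -> list:
        """Return names of fields left empty, so intake can block approval."""
        return [name for name, value in vars(self).items() if not value]

draft = PilotGovernance(
    purpose="Automated first-pass feedback on student drafts",
    data_sources=["student submissions"],
    privacy_notes="",            # not filled in yet
    human_in_the_loop="Instructor reviews all feedback before release",
    evaluation_criteria=["accuracy spot-checks", "time saved per section"],
    rollback_plan="Revert to manual feedback; archive prompts",
)
print(draft.missing_fields())    # → ['privacy_notes']
```

Keeping the record as structured data rather than a free-form document means the same object can drive the intake form, the review checklist, and the shared repository entry.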
Use cases with clear ops impact
- Teaching support (Psychology): Éva Szeli is building an AI partner for lab work with built-in resources and guardrails. Ops takeaway: define guardrails early, embed resources in the tool, and document faculty/student training so adoption doesn't stall.
- Structured feedback (Human Communication): Jen Eden's team built a simple AI that turns hidden steps like brainstorming into explicit, coachable moments for COM 100 students. Ops takeaway: target invisible processes, make thinking visible, and measure depth of engagement rather than just completion.
- Reusable course tools (Philosophy): David McElhoes developed custom AI tutors, mock debaters, and interactive study guides, plus a repository colleagues can reuse. Ops takeaway: invest in shareable templates and distribution so wins spread beyond a single course.
Governance and risk that won't slow you down
- Adopt a simple intake form: data types, model access, storage location, student privacy, and human review points.
- Use a common rubric for AI feedback: transparency to students, accuracy checks, and sources cited.
- Align with an external standard such as the NIST AI Risk Management Framework to keep language consistent across departments.
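The common rubric for AI feedback can work as a simple pass/fail gate before anything ships to students. A sketch under stated assumptions: the three criteria come from the bullet above, while the boolean scoring and criterion names are invented for illustration:

```python
# Criteria from the rubric bullet above; the names and the
# boolean pass/fail scoring are assumptions for this sketch.
RUBRIC = ("transparent_to_students", "accuracy_checked", "sources_cited")

def rubric_pass(review: dict) -> bool:
    """True only if every rubric criterion was marked satisfied."""
    return all(review.get(criterion, False) for criterion in RUBRIC)

review = {
    "transparent_to_students": True,
    "accuracy_checked": True,
    "sources_cited": False,      # missing citations blocks release
}
print(rubric_pass(review))       # → False
```

A shared gate like this keeps "approved" meaning the same thing in Psychology, Philosophy, and Human Communication, which is the point of a common rubric.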
Metrics that matter
- Cycle time: from idea to approved pilot to wider rollout.
- Adoption: percent of courses or units using a vetted AI workflow.
- Quality: rubric-aligned learning gains, faculty satisfaction, research throughput.
- Cost and time saved: hours reclaimed per role, budget reallocated to higher-value work.
- Risk flags: number of privacy, bias, or integrity issues caught before rollout.
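Two of these metrics, cycle time and adoption, are easy to compute consistently once pilots log their dates. A minimal sketch; the stage names and example dates are assumptions, not academy data:

```python
from datetime import date

# Illustrative helpers for the cycle-time and adoption metrics above;
# stage names and the sample numbers are assumptions.
def cycle_time_days(idea: date, approved: date, rollout: date) -> dict:
    """Days spent in each stage, from idea to approved pilot to rollout."""
    return {
        "idea_to_pilot": (approved - idea).days,
        "pilot_to_rollout": (rollout - approved).days,
    }

def adoption_rate(units_using_vetted_workflow: int, total_units: int) -> float:
    """Percent of courses or units using a vetted AI workflow."""
    return round(100 * units_using_vetted_workflow / total_units, 1)

print(cycle_time_days(date(2025, 1, 6), date(2025, 2, 3), date(2025, 4, 7)))
# → {'idea_to_pilot': 28, 'pilot_to_rollout': 63}
print(adoption_rate(12, 40))     # → 30.0
```

Logging the same three dates for every pilot makes cycle time comparable across departments, which is what turns the dashboard into a decision tool rather than a status report.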
30/60/90-day blueprint
- Day 0-30: Stand up the cadence, pick three low-risk use cases, publish your one-page governance, and create a shared repository.
- Day 31-60: Run pilots with clear success metrics and human review. Document prompts, workflows, and training steps.
- Day 61-90: Evaluate impact, retire what didn't work, scale what did, and share templates so other units can copy with minimal lift.
What makes this model work
Kyle Jensen emphasized the academy's strength: sustained collaboration over an academic year with diverse expertise in the room. That time horizon allows ideas to be tested, refined, and operationalized, rather than getting stuck at the "interesting demo" stage.
McMichael highlighted a second advantage: breadth across disciplines reveals patterns. Strategies that work in one context often translate with small tweaks, which accelerates rollout and reduces reinvention.
What's next
The College AI Academy will host its second cohort at the start of the next school year, adding new initiatives to meet emerging needs. For operations leaders, this is the moment to formalize your intake, measurement, and rollout processes so AI work scales without creating chaos.
Quick checklist for your next meeting
- Do we have a one-page governance for every pilot?
- What's the measurable outcome and review cadence?
- Where will prompts, workflows, and rubrics live for others to reuse?
- Who owns scale-up if the pilot hits its target?
- How are we communicating wins and risks across departments?