Temple investigates AI to make university operations leaner and more effective
Temple University is evaluating where artificial intelligence can increase revenue, reduce administrative friction and improve student services. A new task force in its Forward with Purpose plan is surveying colleges to map high-impact use cases and set priorities.
"We're just putting together teams of people now to look at, how might we employ artificial intelligence to make our operations more efficient?" said Interim Vice Provost David Boardman. "It doesn't necessarily mean that fewer people would be employed here, but it should really improve how we deploy the people we have and making sure, you know, they're being used in the most impactful way."
What's already live
AI is not new to campus. The Department of Public Safety installed ZeroEyes gun detection software in November 2024, using computer vision to flag firearms in live video feeds for faster response.
Institutional Advancement launched the Isabel Tower Virtual Engagement Officer in October 2025 to send curated donor emails at scale. The School of Sport, Tourism and Hospitality Management is developing JournAI, an AI mentorship app for student-athletes planned for Spring 2027, built on large language models.
Student use is already mainstream
Students are using tools like ChatGPT and Google Gemini to brainstorm, research and get coursework help. In an October 2025 poll of 86 students, 65% reported some AI use: 62% for brainstorming, 44% for information lookups and 38% for coursework assistance.
Temple currently prohibits generative AI by default unless a professor grants permission. That gap between policy and everyday practice signals a need for clearer guidance, better training and secure access to approved tools.
Compute and cost: the big decision
Scaling AI isn't free. "It would need a huge investment in computational architecture, particularly machines with [Graphical Processing Units] or an investment in third-party servers," said Rob Kulathinal, associate director of the Institute for Genomic and Evolutionary Medicine and co-organizer of the data science and AI network.
Operations teams will need to weigh build vs. buy and plan for ongoing costs, not just pilots. Below are the core cost levers and decisions to line up early.
- Model access: Enterprise LLM subscriptions vs. open-source models hosted on-prem or in VPC.
- Compute: GPUs on campus vs. cloud inference; capacity planning for peak loads.
- Data layer: Secure storage, vector databases, retrieval pipelines and PII redaction.
- Integration: Connectors to SIS, CRM, LMS, ticketing and identity systems.
- Security & compliance: Data governance, audit logging, model usage policies and red-teaming.
- Change management: Training, support and communications to drive adoption.
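The build-vs-buy decision above can be framed as a simple cost comparison. The sketch below is illustrative only: the seat counts, per-seat prices, GPU lease rates and staffing figures are hypothetical placeholders, not actual vendor or Temple pricing.

```python
# Hypothetical build-vs-buy cost model; all figures are illustrative
# placeholders, not real quotes.

def annual_cost_saas(seats: int, price_per_seat_month: float) -> float:
    """Enterprise LLM subscription: cost scales linearly with seats."""
    return seats * price_per_seat_month * 12

def annual_cost_selfhost(gpus: int, gpu_lease_month: float,
                         staff_fte: float, fte_cost: float) -> float:
    """On-prem or VPC hosting: GPU lease plus operations staffing."""
    return gpus * gpu_lease_month * 12 + staff_fte * fte_cost

saas = annual_cost_saas(seats=500, price_per_seat_month=30)
hosted = annual_cost_selfhost(gpus=8, gpu_lease_month=2_000,
                              staff_fte=1.5, fte_cost=120_000)

print(f"SaaS seats:  ${saas:,.0f}/yr")
print(f"Self-hosted: ${hosted:,.0f}/yr")
```

Even a rough model like this makes the key sensitivity visible: subscription costs track headcount, while self-hosting costs track peak capacity and the staff needed to run it.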
Why this matters for operations
Higher ed peers are already using AI in admissions, advising and student success. An article in the Harvard Social Impact Review outlines early wins from chatbots and automation in university services, signaling where quick value tends to appear.
Policy, skills and readiness
Dana Dawson, associate director for teaching and learning at the Center for the Advancement of Teaching, urges more education and clear rules. "If we ignore it … students find themselves underprepared to be competitive when they leave the university."
CAT has published resources, including a faculty guide to AI. For operations, that means aligning policy, training and procurement so faculty, staff and students know what's allowed, what's secure and how to get support.
High-ROI pilots for the next 6-12 months
- Admissions and financial aid Q&A: 24/7 chat that resolves common questions and triages complex cases.
- Advising triage and summaries: Intake forms + AI-generated case summaries routed to advisors.
- Donor engagement: AI-written drafts with human approval; segment donors by likelihood to give.
- IT and registrar support: AI answers from a vetted knowledge base; ticket summarization for agents.
- Classroom and space scheduling: Predict demand, reduce no-shows, suggest optimal rooming.
- Safety signal processing: Expand responsible use of computer vision and anomaly detection with strict oversight.
- Grant scouting: Match faculty interests to new opportunities; auto-draft compliance sections.
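To make one of these pilots concrete, the advising-triage idea can be sketched as a simple scoring-and-routing rule. The fields, thresholds and route names below are all hypothetical assumptions for illustration; a real deployment would tune them with advisors.

```python
# Hypothetical advising-triage sketch: score an intake form and route it.
# Field names, weights and thresholds are illustrative assumptions.

def triage(intake: dict) -> str:
    """Route an advising intake form to the appropriate queue."""
    score = 0
    score += 2 if intake.get("gpa", 4.0) < 2.0 else 0      # academic risk
    score += 2 if intake.get("credits_behind", 0) > 6 else 0  # off-track
    score += 1 if intake.get("financial_hold") else 0         # financial risk
    if score >= 3:
        return "advisor_urgent"
    return "advisor_queue" if score >= 1 else "self_service"

print(triage({"gpa": 1.8, "credits_behind": 9}))  # advisor_urgent
```

A rules-first baseline like this also gives the pilot something to A/B test an LLM-generated summary and routing against.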
Metrics that matter
- Service efficiency: First-response time, resolution time, call/chat deflection rate, cost per interaction.
- Academic services: Advisor caseload capacity, time to appointment, student satisfaction (CSAT).
- Advancement: Open/click rates, meeting set rate, conversion to gift, cost per dollar raised.
- Safety: True/false positive rates, time-to-alert, incident escalation accuracy.
- Quality & risk: Hallucination rate on sampled outputs, privacy incidents, model drift indicators.
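Two of the metrics above, chat deflection rate and sampled hallucination rate, can be computed directly from an interaction log. This is a minimal sketch with made-up field names and sample data; the key assumption is that hallucination labels exist only for the subset of outputs a human has reviewed.

```python
# Illustrative metric computations for an AI service-desk pilot.
# Field names and sample data are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    resolved_by_ai: bool              # closed without human handoff
    first_response_sec: float
    hallucination: Optional[bool]     # set only for manually sampled outputs

def deflection_rate(log):
    return sum(i.resolved_by_ai for i in log) / len(log)

def sampled_hallucination_rate(log):
    sampled = [i for i in log if i.hallucination is not None]
    return sum(i.hallucination for i in sampled) / len(sampled)

log = [
    Interaction(True, 4.2, False),
    Interaction(True, 3.1, None),
    Interaction(False, 9.8, True),
    Interaction(True, 2.5, None),
]
print(f"Deflection rate: {deflection_rate(log):.0%}")
print(f"Hallucination rate (sampled): {sampled_hallucination_rate(log):.0%}")
```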
Operating model recommendations
- Create a central AI program office that sets standards, negotiates contracts and runs shared platforms.
- Use a federated model with "AI champions" inside colleges and administrative units to localize adoption.
- Stand up a model catalog with approved providers, use cases and data classifications.
- Adopt enterprise LLM access (with privacy guarantees) for staff and faculty; log prompts and outputs for auditing.
- Secure data pipelines for retrieval-augmented generation so models only answer from vetted sources.
- Governance: A cross-functional review board for ethics, accessibility, bias and legal.
- Training: Role-based microlearning for staff, faculty and student workers; office hours and sandboxes.
- Procurement: Centralize vendor evaluations, red-team before purchase and require SOC 2/FERPA-aligned controls.
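The "answer only from vetted sources" recommendation above is the core of retrieval-augmented generation. Below is a minimal gatekeeping sketch under stated assumptions: the document store, its contents and the `call_llm` client are hypothetical stand-ins, and the keyword retrieval is a toy replacement for the vector database a production pipeline would use.

```python
# Minimal RAG gatekeeping sketch: the model only sees context drawn from an
# approved document store, and queries with no vetted match are refused
# rather than answered from model memory.
# VETTED_DOCS contents and call_llm are hypothetical stand-ins.

VETTED_DOCS = {
    "registrar/add-drop": "Add/drop closes at the end of week two each semester.",
    "bursar/payment-plans": "Payment plans can be set up through the bursar portal.",
}

def retrieve(query: str, k: int = 2) -> list:
    """Toy keyword overlap retrieval; production systems use a vector DB."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(text.lower().split())), text)
              for text in VETTED_DOCS.values()]
    return [text for score, text in sorted(scored, reverse=True) if score > 0][:k]

def answer(query: str) -> str:
    context = retrieve(query)
    if not context:
        return "No vetted source covers this; routing to a staff member."
    prompt = "Answer ONLY from these sources:\n" + "\n".join(context) + f"\n\nQ: {query}"
    return call_llm(prompt)  # hypothetical enterprise LLM client call
```

The refusal branch is what keeps the assistant inside policy: an out-of-scope question is escalated to a person instead of being improvised by the model, and both the prompt and the retrieved context can be logged for auditing.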
Licensing and access
Kulathinal notes the value of pro-level LLMs for both performance and security. Central licenses can reduce shadow IT, cut costs and enforce data policies. Prioritize seats for student-facing units, advancement, safety, IT and registrars.
Budget sketch (year 1)
- LLM seats: Enterprise licenses for staff/faculty (tiered by role).
- Cloud inference or GPU lease: Reserve capacity for peak seasons (admissions, registration, giving days).
- Integration & data work: Connectors, RAG pipelines, redaction, identity and access.
- Pilot build and evaluation: 3-5 high-ROI pilots with A/B testing and guardrails.
- Change management: Training, documentation, communications and support.
Suggested timeline
- 0-30 days: Complete university-wide survey; define 5-7 priority use cases; establish governance.
- 30-90 days: Launch pilots in admissions, advising and advancement; deploy enterprise LLM access.
- 90-180 days: Evaluate ROI and quality; expand to IT support and scheduling; tune policies.
- 6-12 months: Scale winners; formalize training; publish transparent metrics and guidelines for students.
Bottom line
The demand signal is clear and the tools are ready, but success depends on responsible rollout. Clear policies, secure infrastructure and measurable outcomes will keep AI from becoming another point solution and turn it into operational leverage across the university.
As Dawson put it, ignoring AI leaves students underprepared. A focused, governed program can support staff, improve student experience and grow revenue without compromising privacy or academic integrity.