Universities and Government Agencies Chart Different Paths on AI Adoption
Syracuse University has deployed more than 30,000 AI licenses across campus. New York State's Office of General Services is piloting AI-powered document summarization tools. Two leaders working at the center of these efforts discussed their strategies at a recent event hosted by the Maxwell School of Citizenship and Public Affairs.
Jeff Rubin, Syracuse's senior vice president for digital transformation, and Jeanette Moy, commissioner of the state's Office of General Services, outlined how their institutions are approaching AI for education and AI for government, respectively.
Personalizing Learning at Scale
Rubin opened his remarks by framing AI's potential for higher education bluntly: the technology could transform teaching in ways unseen in 200 years. The traditional lecture model, in which a professor teaches to a room while students take notes and face assessment through papers and exams, has remained largely unchanged.
AI changes that equation by enabling personalization at scale. No single instructor can tailor a course to every student's pace and learning needs. AI can.
Syracuse distributed those 30,000 licenses to address an access problem. Some students had already purchased AI tools independently. Others could not afford them. Faculty and staff needed a secure environment for uploading sensitive documents without routing data through commercial platforms.
The university also built a private wireless network in partnership with JMA Wireless. Thermal sensors in academic buildings detect occupancy without capturing identifying information. The data allows Syracuse to optimize janitorial services, plan building capacity, and eventually adjust heating and cooling based on actual use patterns.
Government Takes a Measured Approach
Moy described a different calculus. Government agencies hold critical information: Medicaid data, health records, testing results. The stakes of stewardship are high, which means caution is not a weakness but a requirement.
Her office manages roughly 30 million square feet of state real estate, oversees 1,500 procurement contracts valued at $44 billion, and administers a design and construction portfolio of approximately $5.7 billion. The agency's AI strategy identifies low-risk, high-value applications first, then builds the data infrastructure and legal frameworks before scaling.
One concrete application: AI-assisted search through the state's contract catalog. Agencies and municipalities often struggle to find what they need, defeating the efficiency those contracts are meant to provide. The approach is low-risk, creates no job displacement, and offers a testbed for what the technology can accomplish.
Document summarization tools for bid documents and contract histories are saving up to three hours per day, according to Moy. Backlogs present another opportunity, though she cautioned that agencies cannot distribute productivity tools without first establishing the right frameworks.
Jobs, Regulation, and Transparency
Both speakers addressed concerns about AI's impact on employment. Rubin cited research showing that less than 1% of the 1.2 million layoffs recorded in 2025 were directly tied to AI. Economic factors and structural business decisions are reshaping the workforce more than the technology itself.
Rubin said he expects AI will ultimately create more jobs than it displaces, though every job will change. Students who do not learn to incorporate AI into their discipline will be at a disadvantage.
Moy drew a parallel to the dot-com era and the transformation of publishing, which upended business models at institutions like the Brooklyn Public Library, where she once served as chief strategy officer. The fear and exuberance surrounding those transitions mirror what society is experiencing now.
Both panelists returned repeatedly to transparency. Rubin pointed to AI companies publishing their system prompts as a model for responsible deployment. Syracuse launched an AI-powered course search tool called Clementine that similarly makes its operating parameters visible.
A student asked whether recent court rulings holding social media platforms liable for algorithmic harm to minors should set a precedent for regulating platforms like ChatGPT. Rubin was direct: "We made the mistake with social media. These companies should have an obligation to have guardrails."
Moy noted that government often lags behind rapid technological change, but intervention becomes necessary when innovation causes public harm. Governor Kathy Hochul recently launched the FutureWorks Commission to study AI's effects on the labor market.
Another student raised concerns about fraud and biased algorithms. Moy said the answer is not avoiding AI but understanding it well enough to spot misuse. "If we don't understand it, we will fall behind."
Rubin framed the detection challenge as both technological and philosophical. As AI becomes embedded in everything from autocomplete to document editing, defining what counts as "AI-generated" becomes increasingly difficult. Most content will eventually contain some AI component assisting in its creation.