State and University Leaders Discuss AI's Role in Government and Higher Education
Two senior officials outlined how their institutions are deploying artificial intelligence in a fireside chat at Syracuse University's Maxwell School on March 26. Jeanette Moy, commissioner of the New York State Office of General Services, and Jeff Rubin, Syracuse University's chief digital officer, discussed AI adoption, workforce impact and institutional readiness.
The conversation centered on a practical question: whether public institutions can lead AI adoption responsibly rather than react to it.
AI and the Classroom
Rubin said AI has the potential to reshape higher education in ways not seen in 200 years. The traditional model, in which a professor lectures while students take notes and face assessment through papers and exams, has remained largely unchanged. AI enables personalization at scale, something no instructor could do alone.
Syracuse has deployed more than 30,000 AI licenses across campus. The move addresses an equity gap: some students had already purchased AI tools independently, while others could not afford them. Faculty and staff needed a secure environment to upload sensitive documents without routing data through commercial platforms.
The university also built a private wireless network with thermal sensors in academic buildings. The sensors detect occupancy without capturing identifying information, allowing the institution to optimize janitorial services, plan building capacity and adjust heating and cooling based on actual use.
Government's Cautious Approach
Moy described the state's measured pace of technology adoption as a necessary safeguard. "I would contend that it's important that government is risk-averse," she said. The state holds sensitive information, including Medicaid data, health data and testing records, that demands careful stewardship.
Her office oversees roughly 30 million square feet of state real estate, manages 1,500 procurement contracts valued at $44 billion and administers a design and construction portfolio of approximately $5.7 billion. The agency's AI strategy involves identifying low-risk, high-value applications first, then building the data infrastructure to support them.
Procurement search emerged as a logical starting point. Agencies and municipalities navigating the state's contract catalog often struggle to find what they need, undermining the efficiency those contracts are meant to provide. AI-assisted search poses low risk and displaces no jobs.
The agency is also piloting AI-powered summarization tools for bid documents and contract histories, which staff report can save up to three hours per day. Clearing document backlogs, a challenge shared by public agencies across the sector, presents another opportunity.
The Jobs Question
Both panelists addressed audience concerns about AI's impact on employment, a topic that gained urgency in New York following Governor Kathy Hochul's launch of the Future Works Commission to study AI's effects on the labor market.
Rubin cited research suggesting that less than 1% of the 1.2 million layoffs recorded in 2025 were directly attributable to AI. Economic factors and structural business decisions are reshaping the workforce more than the technology itself, he said. He expressed confidence that AI will ultimately create more jobs than it displaces, though every job will change.
"If you don't know how to incorporate AI into your domain and discipline, you will be at a disadvantage," Rubin said. "Students need to have the tools and the classes."
Moy recalled the dot-com era and the transformation of publishing that upended institutions like the Brooklyn Public Library, where she once served as chief strategy officer. The fear and exuberance that accompanied those transitions mirror what society is experiencing today.
"We want to make sure that we're thinking about it ethically, that we're balancing it according to public need," she said. "And we're having active conversations about those trade-offs."
Transparency and Regulation
Both panelists emphasized transparency in AI systems. Rubin pointed to Anthropic's practice of publishing system prompts as a model for responsible deployment. Syracuse recently launched an AI-powered course search tool called Clementine that similarly makes its operating parameters visible.
Rubin raised the challenge of AI-generated media and distinguishing real content from fabricated content. A student asked whether recent court rulings holding social media platforms liable for algorithmic harm to minors set a precedent for regulating platforms like ChatGPT.
Rubin was direct: "We made the mistake with social media. These companies should have an obligation to have guardrails."
Moy pointed to Hochul's recent policy proposals targeting addictive technology, including requirements for more restrictive default settings on children's accounts. Government often lags technological change, but intervention becomes necessary when innovation results in public harm, she said.
A student raised concerns about AI's potential to enable fraud and biased algorithms. Moy emphasized that the answer isn't avoiding AI but understanding it well enough to spot misuse. "If we don't understand it, we will fall behind," she said.
Rubin framed the detection challenge as both technological and philosophical. As AI becomes embedded in autocomplete, document editing and countless other tools, defining what counts as "AI-generated" becomes increasingly difficult. "My gut is almost every piece of content out there will have some AI piece to it, assisting us," he said.