Google's chief technologist addresses limits and potential of AI in math research at Rice lecture

Google's chief technologist told Rice University students that AI already aids math research but can't yet crack problems that have stumped humans for decades. Verifying AI-generated proofs remains slow and manual, he said.

Published on: Apr 16, 2026

Google's Chief Technologist Questions When AI Can Exceed Human Research Capability

Prabhakar Raghavan, chief technologist at Google, delivered a lecture at Rice University this month examining how large language models can assist, and where they fall short, in mathematics and computer science research.

Speaking as part of the Ken Kennedy Institute Distinguished Lecture Series, Raghavan structured his talk around a central question: When and how can LLMs help in ways that exceed human capability?

The verification problem remains unsolved

Raghavan acknowledged a critical limitation. AI-generated proofs require careful verification, a process that remains time-intensive and manual.

"Verifying the correctness of a proof generated by AI remains a critical bottleneck," he said.

He opened with a practical example: balanced allocations in computer systems. Real-world server routing involves sudden traffic spikes, competing priorities, and incomplete information. AI systems can generate programs that explore many possible solutions to such problems, arriving at results that would be difficult for humans to produce directly.
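The textbook form of balanced allocations makes the point concrete: throw n balls into n bins, and letting each ball probe two random bins and take the emptier one sharply reduces the maximum load compared with a single random probe. The Python sketch below simulates that classic setup; it is an illustrative reconstruction of the problem family Raghavan referenced, not code from the talk.

```python
import random

def max_load(num_bins: int, num_balls: int, probes: int) -> int:
    """Throw balls into bins one at a time. Each ball samples `probes`
    distinct random bins and lands in the least-loaded of them.
    Returns the load of the heaviest bin at the end."""
    bins = [0] * num_bins
    for _ in range(num_balls):
        candidates = random.sample(range(num_bins), probes)
        target = min(candidates, key=lambda b: bins[b])
        bins[target] += 1
    return max(bins)

if __name__ == "__main__":
    random.seed(0)
    n = 10_000
    # One random probe: max load grows like ln n / ln ln n.
    print("one choice :", max_load(n, n, 1))
    # Best of two probes: max load drops to roughly ln ln n.
    print("two choices:", max_load(n, n, 2))
```

The gap between the two printed loads is the "power of two choices" effect from the balanced-allocations literature; real server routing adds the traffic spikes and incomplete information Raghavan described.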

But long-standing problems in mathematics and computer science pose a harder test. "Our goal isn't to ask if AI can help with math research; it already does," Raghavan said. "Our goal is to ask whether problems that have stood the test of time can see progress from AI, yielding results that stand the test of time."

Training data skews toward success

Raghavan raised a structural problem with how AI systems learn. Current models train largely on successful results, missing the failures that often drive human insight.

"Is mathematics about hill climbing on benchmarks, akin to a video game leaderboard? Or is it a fundamentally different human enterprise?" he asked.

He positioned LLMs as enabling tools rather than competitors. AlphaFold, which contributed to a Nobel Prize-winning breakthrough in protein structure prediction, illustrates how embedded these systems have become in scientific work.

"Banning LLMs from science is like banning microscopes from biology or telescopes from astronomy," Raghavan said.

Researchers see balanced assessment

Richard Wong, an assistant teaching professor of mathematics at Rice, said the talk struck a rare balance. "It was really insightful to hear from someone at Google who knows the state of the art … but I also appreciated that he talked about the limitations," Wong said.

Nada Ali, a graduate student in Rice's mathematics department, found value in hearing from someone directly building AI systems. "It was interesting to actually hear all of this from someone who is directly engaged in making all of these AI engines," she said.

For researchers working across disciplines, the practical question remains: how to use these tools productively while accounting for their constraints. Understanding both the capabilities and the failure modes matters more than blanket enthusiasm or blanket skepticism.


