From Hackathons to Published Research: U of T Schmidt AI Fellows Use Foundation Models to Speed Up Science

U of T Schmidt AI Fellows hosted 70 researchers to test domain-specific foundation models for real scientific work. One hackathon project became a paper on Toronto's cherry bloom.

Published on: Feb 27, 2026

U of T Schmidt AI Fellows explore how artificial intelligence can accelerate scientific discovery

Last November, four Eric and Wendy Schmidt AI in Science Postdoctoral Fellows at the University of Toronto - Ashley Dale, Biprateep Dey, Ishrath Mohamed Irshadeen and David Pellow - organized the Foundation Models for Science workshop. The goal was straightforward: test how domain-specific foundation models can speed up real scientific work through hands-on tutorials, multi-team problem-solving and shared infrastructure.

Think ChatGPT-style systems trained on massive datasets in biology, astrophysics or chemistry. These foundation models can be adapted for specialized tasks, generate useful summaries and representations, or create training signals for downstream models when labeled data is scarce.
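One common adaptation pattern hinted at above is to keep a pretrained model frozen and train only a small task head on its representations. The sketch below is purely illustrative and uses a random projection as a stand-in for a real foundation-model encoder; every name in it is a hypothetical placeholder, not anything from the workshop.

```python
import math
import random

# Toy stand-in for a frozen foundation-model encoder. In practice this
# would be a large pretrained network; here it is a fixed random
# projection followed by tanh, which we never update.
random.seed(0)
DIM_IN, DIM_EMB = 4, 8
W_FROZEN = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_EMB)]

def embed(x):
    """Frozen representation: the 'foundation model' output we build on."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_FROZEN]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A small labeled set (the scarce-data regime): label = 1 if features sum > 0.
xs = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(60)]
data = [(x, 1 if sum(x) > 0 else 0) for x in xs]

# Train only a tiny linear "probe" head on top of the frozen embeddings.
w = [0.0] * DIM_EMB
b = 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        e = embed(x)
        p = sigmoid(sum(wi * ei for wi, ei in zip(w, e)) + b)
        g = p - y  # gradient of log loss w.r.t. the logit
        w = [wi - lr * g * ei for wi, ei in zip(w, e)]
        b -= lr * g

correct = sum(
    (sigmoid(sum(wi * ei for wi, ei in zip(w, embed(x))) + b) > 0.5) == (y == 1)
    for x, y in data
)
accuracy = correct / len(data)
print(f"train accuracy with frozen encoder + linear head: {accuracy:.2f}")
```

The design point is the division of labor: the expensive encoder is reused as-is, and only a few dozen parameters are fit to the scarce labels.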

Why this matters now

Many fields have a data bottleneck. High-quality, labeled datasets are limited, but questions keep getting harder. Foundation models help bridge that gap by transferring broad representations into narrow tasks - from protein design to sky surveys to reaction prediction.

"This workshop was quite timely as we are approaching an era of AI for science where the availability of high-quality training data has become the bottleneck, as the architectures of machine learning models are evolving rapidly," says co-organizer Mohamed Irshadeen.

What happened at the workshop

Nearly 70 researchers from Asia, Europe and North America met at the Schwartz Reisman Innovation Campus for three days. Funded by a Schmidt Sciences Community Initiative Fund grant, the event focused on practical work: identifying leading cross-disciplinary models, running tutorials, and shipping prototypes through hackathons.

  • Hands-on model runs and tutorials to seed new ideas
  • Multi-team sprints that paired domain experts with AI researchers
  • Shared baselines and reproducible workflows to speed follow-up work

From hackathon project to published research

An interdisciplinary team - including Assistant Professor Joshua Speagle (Statistical Sciences; Astronomy & Astrophysics) and Schmidt AI in Science Postdoctoral Research Fellow Kevin McKinnon - turned a hackathon prototype into a workshop paper. Their project, "Predicting Cherry Blossom Peak Bloom in Toronto Through Climate-Aware Tabular Foundation Models," illustrates how cross-domain teams can turn a narrow question into a working model with real-world value.

Strengthening U of T's AI-in-science network

The workshop builds on major U of T initiatives - the Acceleration Consortium, the Data Sciences Institute and the Schwartz Reisman Institute for Technology and Society - and on active partnerships with the Vector Institute for Artificial Intelligence. Now in its fourth year, the Schmidt AI in Science Postdoctoral Program connects fellows across departments, building a training community committed to applying AI where it counts.

"The success of this workshop - from securing a highly competitive grant to drawing an international audience - highlights a pivotal shift in how we do science. Artificial intelligence is no longer simply transforming our methods. It has become essential to the very questions we are able to ask and answer," says Lisa Strug, director of U of T's Data Sciences Institute and co-lead of the Schmidt AI in Science Postdoctoral Fellowship program.

Practical steps for researchers

  • Start with the question, not the model. Define the target variable, constraints and acceptable error bars before picking an approach.
  • Match model to data regime. With limited labels, consider prompt-based inference, parameter-efficient fine-tuning or distillation to smaller specialists.
  • Leverage cross-disciplinary pairs. Put a domain lead next to an AI lead for faster feature design, validation and error analysis.
  • Treat data as a product. Log provenance, version datasets, and document assumptions so others can reproduce and extend your work.
  • Measure what matters. Track calibration, uncertainty, out-of-distribution behavior and downstream experimental lift - not just leaderboard metrics.
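The last point above, checking calibration rather than just leaderboard accuracy, can be sketched with a minimal expected calibration error (ECE) computation. This is one illustrative metric under assumed equal-width binning, not a recipe from the workshop; the function name and bin count are our own choices.

```python
def expected_calibration_error(probs, labels, n_bins=5):
    """Average |mean confidence - empirical accuracy| over equal-width
    probability bins, weighted by the fraction of predictions per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    ece = 0.0
    total = len(probs)
    for bucket in bins:
        if not bucket:
            continue
        conf = sum(p for p, _ in bucket) / len(bucket)
        acc = sum(y for _, y in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(conf - acc)
    return ece

# A well-calibrated toy case: predictions of 0.8 that are right 80% of the time.
probs = [0.8] * 10
labels = [1] * 8 + [0] * 2
print(expected_calibration_error(probs, labels))  # near zero
```

A model can top a leaderboard while being badly calibrated; a check like this catches overconfident predictions before they feed into downstream experiments.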

Learn more and keep building

For foundations and benchmarks, see the Stanford Center for Research on Foundation Models (CRFM). For broader context on model families and applications, explore coverage of generative AI and LLMs, along with application-focused updates in AI for science and research.
