Weill Cornell launches AI to Advance Medicine to build smart, safe AI use across its health system
Weill Cornell Medicine has rolled out AI to Advance Medicine, an enterprise-wide push to make artificial intelligence useful, safe and teachable across clinical care, research and education. The program pairs an ongoing lecture series with targeted grants to give faculty, staff and students the knowledge, funding and infrastructure they need to use AI responsibly.
Why it matters
Many clinicians and educators see the promise of AI but aren't sure when to trust it. As Weill Cornell's chief information officer, Vinay Varughese, put it, the goal is to teach people "when they can trust AI and when they should be appropriately skeptical."
This initiative sets a practical standard: build literacy, provide guardrails and fund the tools that make day-to-day work better, without compromising patient safety or academic integrity.
What the program includes
- Dean's Lecture Series: A bimonthly forum to build shared AI literacy and align projects. The first talk, "Creating an AI-Enabled Learning Health System: Now It's Personal," will be delivered by Dr. Peter J. Embi of Vanderbilt University Medical Center on Feb. 23.
- Targeted grants: Seed funding and technical support for teams that need compute, cloud services or expertise to launch AI-driven research and education projects. As Varughese noted, "AI has a cost: servers, cloud resources, expertise. And that's what the grant can help provide."
- Infrastructure and services: Centralized support to evaluate tools, standardize practices and help teams deploy AI with proper governance and oversight.
- Showcasing impact: Highlighting AI efforts that enhance patient care, strengthen medical education and accelerate biomedical research.
Context: the larger trend
The program follows Weill Cornell's CARE strategic plan (clinical, AI, research and education), which guides how data science is developed and supported across the enterprise. It also builds on Cornell University's broader push to expand AI leadership, instruction and evaluation across the institution.
For health systems and academic centers, this is a workable template: unify strategy, invest in shared services, and teach AI as a skill that supports clinical judgment and scholarly work.
Practical takeaways for educators and academic leaders
- Create a shared AI literacy track: Offer short, recurring sessions for faculty, residents, students and staff with case-based examples, model limitations and hands-on practice.
- Define trust and skepticism rules: Establish review protocols, documentation standards and human-in-the-loop checkpoints aligned to recognized frameworks like the NIST AI Risk Management Framework.
- Fund starters, not just stars: Small grants for compute, secure sandboxes and expert time lower the barrier for promising pilots.
- Stand up governance early: Central evaluation of models, data use and privacy reduces duplication and curbs risk before scale.
- Measure real outcomes: Track clinical, educational and research impact. Don't stop at model accuracy; assess equity, safety and workflow fit.
- Adopt a learning health system mindset: Close the loop from data to practice to outcomes, then back to model updates. See the AHRQ overview of Learning Health Systems.
- Prioritize interdisciplinary teams: Pair clinicians and educators with data scientists, ethicists and IT to stress-test ideas.
- Engage students: Use supervised projects to build skills while contributing to real institutional needs.
On the record
"We are thinking about AI in medicine in a holistic way," said Dr. Fei Wang, associate dean for AI and data science at Weill Cornell Medicine. "This is not about a single department or a single group, but about collective institutional effort and momentum."
"AI can be overhyped, but its capabilities are increasing at an exponential pace," added Varughese. "We need a unified strategy that will collectively align and drive the AI efforts emerging across the institution."