From questions to code: AI mentors first-year med students in UNR Med's IDEA Project

UNR Med's IDEA Project uses TrialMind to teach first-years how evidence gets built. AI speeds reviews and code, while students focus on clear questions, methods, and papers.

Published on: Feb 27, 2026

Leveraging AI for medical education: teaching first-year students to think like clinical researchers

UNR Med's Independent Data Exploration and Analysis (IDEA) Project gives first-year medical students a practical way to learn how evidence gets built and judged. Created by John Westhoff, M.D., MPH, the yearlong course pairs large language models with hands-on data analysis so students can move from a clinical question to a defensible result.

The project started with ChatGPT and Claude. It now runs on TrialMind, a platform built for research teams, which UNR Med has integrated directly into its curriculum.

Why this program exists

"Our students needed a stronger foundation in how clinical research actually happens," said Westhoff. The point isn't to turn every student into a career researcher. It's to make sure future physicians can read studies, spot weak methodology, and make sound decisions from the literature.

How the AI stack works in class

TrialMind's literature review tools find, screen, and synthesize studies, while its data science features assist with coding, statistical analysis, and modeling. It also supports trial design and data extraction, giving students a single workflow from question to analysis.
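The screening step described above can be made concrete with a generic sketch. This is not TrialMind's API; the records, fields, and criteria below are invented for illustration. The point is that inclusion criteria become explicit, testable rules rather than ad hoc judgment calls.

```python
# Generic illustration of the "screen" step in a literature workflow:
# apply explicit inclusion criteria to candidate records.
# All titles, fields, and criteria here are invented for demonstration.

records = [
    {"title": "Firearm mortality trends in US youth", "year": 2022, "design": "retrospective cohort"},
    {"title": "Case report: rare pediatric injury", "year": 2021, "design": "case report"},
    {"title": "Opioid deaths in older adults", "year": 2018, "design": "cross-sectional"},
]

def include(rec: dict) -> bool:
    """Inclusion criteria: published 2019 or later, and not a case report."""
    return rec["year"] >= 2019 and rec["design"] != "case report"

screened = [r for r in records if include(r)]
print([r["title"] for r in screened])  # only the 2022 cohort study survives
```

Writing criteria as a function also makes them auditable: anyone reviewing the project can read exactly which studies were excluded and why.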

TrialMind was originally built for clinical researchers and industry teams; its partners include Mass General Brigham, Beth Israel Lahey Health, Regeneron Pharmaceuticals, and Guardant Health. UNR Med is the first medical school to plug it straight into a required course.

AI as mentor, not shortcut

Students can describe a study in plain language and have the platform translate it into executable statistical code. The platform lowers technical barriers without removing the need to define outcomes, write clear questions, and interpret the results. In other words: less time on mechanics, more time on reasoning.
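As a hedged illustration of that translation step: a plain-language question like "Did the crude mortality rate rise between two periods?" might come back as code along these lines. All counts below are made up for demonstration; they are not CDC data, and this is not output from any specific tool.

```python
# Hypothetical example of code generated from a plain-language question:
# "Did the crude mortality rate rise between two periods?"
# All counts are invented for demonstration, NOT real surveillance data.

def mortality_rate(deaths: int, population: int) -> float:
    """Crude death rate per 100,000 population."""
    return deaths / population * 100_000

# Invented counts for two hypothetical periods
period_a = {"deaths": 3_200, "population": 80_000_000}
period_b = {"deaths": 4_100, "population": 82_000_000}

rate_a = mortality_rate(**period_a)   # 4.00 per 100k
rate_b = mortality_rate(**period_b)   # 5.00 per 100k
rate_ratio = rate_b / rate_a          # 1.25: a 25% relative increase

print(f"Period A: {rate_a:.2f} per 100k")
print(f"Period B: {rate_b:.2f} per 100k")
print(f"Rate ratio: {rate_ratio:.2f}")
```

Even on a toy example, the student's job is unchanged: decide which periods to compare, whether crude rates are appropriate, and what a 1.25 rate ratio actually means clinically.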

Cohort structure and workflow

Each cohort spends a full year in the IDEA Project. Students are split into 18 groups of four, hit staged milestones, and still have room to tinker. Every student builds an individual case study with TrialMind. The program is now in its third year since launching in 2023.

Outputs that matter

Students have moved projects across the finish line, including peer-reviewed publications and national presentations:

  • Trends and disparities in firearm-related mortality among U.S. children and young adults, 1999-2020
  • Geographical trends in cerebrovascular disease mortality in the United States, 1999-2020, accepted for presentation at the American Heart Association's International Stroke Conference, with an abstract in Stroke
  • Adults 65 years and older not immune to the opioid epidemic, presented at the American Society of Anesthesiologists' national meeting

M.D./Ph.D. student Joseph Tran contributed to all three. He noted that meaningful, publishable work comes from asking the right question and applying accessible, well-defined methods.

Student perspective

Before TrialMind entered the course, Tran completed the IDEA Project and later moved into the Ph.D. phase of his training. With a computer science background from Stanford and experience in Python and R, he sees large language models lowering the barrier to statistical and computational analysis so students can focus on choices that affect patient care.

Why this matters for science and education leaders

The IDEA Project shows how AI can compress low-level tasks while raising the bar on study design, variable selection, and interpretation. It builds evidence literacy for future clinicians and a repeatable research workflow for trainees who want to publish.

Students learn to ask sharper questions, critique methods, and move from intuition to data-backed answers. That habit compounds across a career, even for those who never run a lab.

Practical takeaways if you're building something similar

  • Make the question do the heavy lifting. Require students to define outcomes, comparators, inclusion criteria, and a short analysis plan before touching code.
  • Use AI to triage the literature, then keep human judgment in screening and synthesis. Treat LLM output as drafts, not ground truth.
  • Validate every line of generated code. Lock in datasets and version control early to keep analyses reproducible.
  • Favor public health sources with stable schemas, such as CDC injury and mortality data (see CDC WISQARS).
  • Set authorship, data privacy, and citation guardrails upfront. Clarity here prevents rework later.
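The "lock in datasets" point above can be sketched in a few lines: record a cryptographic fingerprint of the raw data file at the start of the project, and verify it before every analysis run so the dataset cannot drift silently. The file name and contents below are hypothetical stand-ins.

```python
# Minimal sketch of dataset locking for reproducibility: fingerprint the raw
# data file once, then verify the fingerprint before each analysis run.
# The file name and contents are hypothetical stand-ins.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# First run: create a stand-in data file and record its digest
data = Path("mortality_extract.csv")
data.write_text("year,deaths\n1999,3200\n2020,4100\n")
expected = fingerprint(data)

# Every later run: refuse to proceed if the file changed silently
assert fingerprint(data) == expected, "Dataset changed; re-verify before analysis"
print("Dataset verified:", expected[:12])
```

Committing the digest to version control alongside the analysis code means any reviewer, or a student a year later, can confirm they are analyzing the same bytes.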

Further learning

If you're exploring how AI supports literature review, coding, and data analysis across research domains, see AI for Science & Research.

