Medical educators weigh risks and benefits of AI in training at University of Miami conference

Medical educators are racing to determine when AI should enter training. Introduce it too early, and students lose the ability to think without it. A study of Polish endoscopists found their skills declined after just three weeks of AI reliance.

Published on: Apr 04, 2026

Medical Schools Confront AI's Timing Problem

At the University of Miami Miller School of Medicine's Innovations in Medical Education Conference in April, 264 educators from eight countries gathered around a single tension: artificial intelligence is already embedded in medical training systems, but educators don't yet know when, or how, to use it safely.

The core risk is straightforward. Introduce AI at the wrong moment in training, and students forget how to think without it. A study of Polish endoscopists showed that after three weeks of relying on AI, their performance declined once the technology was removed.

"Your trainees will use agents. It will become an inevitability," said Patrick Tighe, M.D., associate dean for AI applications at the University of Florida College of Medicine. "But if these tools are misapplied, we risk cognitive decline."

When AI Helps, and When It Harms

The question isn't whether to use AI in medical education. It's which specific task to assign to it, and at what stage of training.

New reasoning models, systems that verify each step of an output rather than just the final answer, will reshape how students learn, Tighe said. Used correctly, they can push performance beyond what educators thought possible. Used carelessly, they can erode the very skills students need to develop.

"The key is giving AI very specific missions and understanding what we are trying to achieve," Tighe said.

Several speakers pressed beyond technical concerns to ask whether AI can enhance human skills like communication and empathy. The answer depends entirely on how the tool is designed and what educators do with its output.

"If you cannot evaluate the output, do not use it," said Nicholas Tsinoremas, Ph.D., vice provost for research computing at the University of Miami. "Education is now one of AI's most active innovation zones. This will change the way we deliver education."

Ethics Beyond Compliance Checklists

Ken Masters, Ph.D., a professor of medical informatics at Sultan Qaboos University in Oman, mapped out the stakes beyond individual classrooms. AI could evolve from a tool into a collaborator, confidant, or authority that students defer to without question.

He cited real failures: autonomous agents deleting entire email inboxes despite human intervention. He flagged emerging risks: students forming relationships with AI tutors, embodied AI agents appearing in clinical settings, and the possibility of artificial general intelligence with recursive learning capabilities.

Masters argued that medical schools must address AI ethics now, at scale, and globally. The conversation cannot stay confined to institutional ethics committees.

"The basics matter. It is right to address them now," Masters said. "AI ethics in medical education must expand to consider society-wide implications, corporate influence and the long-term human-AI relationship."

Faculty Adoption Isn't Top-Down

Integration depends less on technology than on culture. Barry Issenberg, M.D., director of the Gordon Center for Simulation and Innovation in Medical Education at the Miller School, said faculty operate at different stages of adoption: innovators, early adopters, and those just beginning.

"We're not asking faculty to blindly adopt tools, but to think critically about how to use them," Issenberg said.

Alexis Rossi Aguirre, Ph.D., director of Medbiquitous at the Association of American Medical Colleges, added that infrastructure matters as much as the tools themselves. "You can have the best AI tool, but if your infrastructure can't use it, it won't matter," she said.

Hands-On Training Moves Forward

The conference offered workshops on prompt engineering, teaching faculty and learners to interact with AI systems intentionally and safely. Sessions covered competency-based medical education with AI, personalized learning plans, and AI-driven milestone tracking.

Live demonstrations showed participants AI platforms already in use across health professions programs. A final panel on industry partnerships, featuring Microsoft and Aidoc, debated data governance, transparency, and the risks of commercial reliance in academic settings.


The Question Remains Human

Latha Chandran, M.D., M.P.H., M.B.A., executive dean for education and policy, closed the conference with a question: How do we ensure that these technologies strengthen rather than diminish the human dimensions of medical practice?

She offered no simple answer. Instead, she emphasized that the conversations started at the conference will continue across institutions and disciplines long after the meeting ends.

The stakes are clear. Medical schools must act now, not to adopt AI wholesale, but to think deliberately about when, how, and why to use it. The alternative is to let the technology decide for them.
