Agentic AI in Healthcare: Balancing Workflow Efficiency with Patient Safety and Risk

Agentic AI in healthcare acts autonomously to improve workflows but requires oversight to ensure safety. Collaboration with vendors and clear strategies are essential for effective use.

Published on: Jul 12, 2025

Agentic AI in Healthcare: Practical Insights from the HIMSS AI Forum

Agentic AI, a form of artificial intelligence that acts autonomously to make decisions and adjust actions with minimal human input, is gaining traction in healthcare. It promises to streamline clinical workflows, but careful evaluation of risks is essential to ensure safe and effective use.

Key Takeaways from the HIMSS AI Forum

At the recent HIMSS AI Forum in New York, experts discussed how agentic AI is being integrated into healthcare settings. Lyle McMillin, AVP of product management at Hyland, pointed out that these AI agents can operate independently or as part of workflows spanning multiple departments. He highlighted that over half of healthcare data is unstructured and urged organizations to develop strategies for managing both structured and unstructured data.

Dr. Lukasz Kowalczyk, gastroenterologist and CEO of Soothien HealthTech Advisory, stressed the importance of understanding how agentic AI fits within existing workflows and identifying appropriate starting points for implementation.

Jason Smith, venture fellow at Matter, advised organizations to carefully consider the reasons for moving from traditional AI to agentic AI. He recommended beginning with workflows that have low variability, such as summarization tasks or pre-operative documentation, to gradually build familiarity and trust with the technology.
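
To make that "start small" advice concrete, below is a minimal, hypothetical Python sketch of a low-variability summarization step with a clinician approval gate and an audit trail. The names here (`summarize_note`, `propose_summary`, `clinician_review`) are illustrative assumptions, not anything described by the panel, and the summarizer is a stub standing in for whatever model or vendor service an organization actually uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    """One proposed action by the agent, held until a clinician signs off."""
    source_note: str
    draft_summary: str
    approved: bool = False
    reviewer: str | None = None
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def summarize_note(note: str) -> str:
    """Placeholder for the organization's approved summarization model or vendor API."""
    return note[:200] + ("..." if len(note) > 200 else "")

def propose_summary(note: str) -> AgentAction:
    """Agent step: produce a draft, but write nothing back to the record yet."""
    return AgentAction(source_note=note, draft_summary=summarize_note(note))

def clinician_review(action: AgentAction, reviewer: str, accept: bool) -> AgentAction:
    """Human-in-the-loop gate: only an explicit approval releases the draft."""
    action.approved = accept
    action.reviewer = reviewer
    return action

audit_log: list[AgentAction] = []

if __name__ == "__main__":
    draft = propose_summary("Pre-operative note: 62-year-old patient scheduled for ...")
    reviewed = clinician_review(draft, reviewer="dr_example", accept=True)
    audit_log.append(reviewed)  # every proposal and decision is retained for later review
    if reviewed.approved:
        print("Summary released to the record:", reviewed.draft_summary)
```

The specifics are illustrative; the design point is that the agent only proposes, nothing reaches the record without a named reviewer, and every decision is logged, which also speaks to the transparency and accountability concerns raised below.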

Exercise Caution in Implementation

Dr. Jonah Feldman, medical director of clinical transformation and informatics at NYU Langone Health System, emphasized that physician oversight remains critical. While AI can assist, the responsibility and gravity of medical decisions require that clinicians maintain control.

Kowalczyk echoed this, noting that clinicians are accustomed to balancing risk and liability. Centralizing decision-making in AI raises accountability questions that health systems must address thoughtfully.

McMillin reinforced the need for transparency in AI decision-making. Health systems must understand how these agents arrive at their conclusions and actions, since they effectively delegate authority to the AI.

Partnering with Vendors is Essential

The panel agreed that close collaboration with AI vendors is crucial. Given the rapid development of agentic AI technologies, no organization has all the answers yet. Working together helps clarify the technology’s capabilities and how best to incorporate it into clinical workflows.

  • Focus on clear, low-variability workflows to start integration.
  • Ensure transparency and oversight to maintain patient safety and clinician responsibility.
  • Address liability concerns around autonomous AI decision-making.
  • Collaborate closely with vendors for ongoing support and adaptation.

Healthcare professionals looking to deepen their knowledge about AI applications in clinical settings may benefit from specialized training. Explore practical AI courses tailored for healthcare roles at Complete AI Training.

