Flora, Khattak and Juneja examine accountability gaps in healthcare AI on podcast episode 111

Healthcare AI is outpacing the legal rules meant to govern it, leaving hospitals unclear on who bears liability when systems fail. A new podcast episode examines vendor accountability, compliance gaps, and reimbursement risks.

Published on: May 13, 2026

Healthcare Leaders Face Accountability Gap in AI Deployment

Most healthcare organizations implementing AI focus on whether the technology works. A new episode of "AI and Healthcare" shifts that question: who is responsible when it fails?

The podcast episode, featuring Douglas Flora (Executive Medical Director at Yung Family Cancer Center and President-Elect of the Association of Cancer Care Centers), Akifa Khattak, and Sanjay Juneja, examines the operational and legal gaps that have opened between AI deployment and the rules governing it.

The Core Problem: Rules Lag Behind Deployment

Healthcare organizations are building AI systems faster than the legal and operational frameworks to govern them. That gap creates real consequences for hospitals, clinicians, and patients.

The episode addresses questions most AI conversations avoid:

  • Vendor accountability when systems fail
  • Data infrastructure requirements and environmental costs
  • Compliance in agentic workflows (systems that make decisions autonomously)
  • Reimbursement when AI-driven recommendations shape treatment outcomes

These aren't theoretical concerns. They determine whether a hospital can recover damages from a vendor, whether clinicians face liability for an AI recommendation, and whether insurance covers treatments based on algorithmic guidance.

What's at Stake

Vendor accountability remains murky. If an AI system recommends a treatment plan that harms a patient, does liability fall on the vendor, the hospital, or the clinician who followed the recommendation?

Data infrastructure questions affect both performance and compliance. Poor data governance can make AI systems unreliable. It also creates audit trails that regulators and lawyers will examine.

Agentic workflows, in which AI systems make decisions with minimal human review, raise compliance risks that existing healthcare regulations don't fully address.

Reimbursement based on AI-driven outcomes introduces another layer: payers need confidence in the systems driving clinical decisions, but that confidence doesn't yet have a standard framework.

Where to Start

Healthcare leaders implementing AI should treat accountability architecture as seriously as clinical validation. Legal and operational teams need a seat at the table during deployment, not after problems emerge.

Understanding generative AI and LLM capabilities also matters: these systems power many of the agentic workflows that create compliance gaps.

The podcast has reached over 86,000 subscribers across 120 countries with nearly five million downloads. New episodes are released biweekly.

