Clinical AI Governance Moves From Theory to Practice: TensorBlack Podcast Episode 106
Episode 106 of the AI & Healthcare by TensorBlack podcast is live, and it zeroes in on a topic every health leader needs to get right: clinical AI governance. Douglas Flora, Executive Medical Director of the Yung Family Cancer Center at St. Elizabeth Healthcare and President-Elect of the Association of Cancer Care Centers, moderates the discussion with an expert panel.
The show continues to grow, with 85k subscribers and nearly 5M downloads across 120+ countries, and new in-depth episodes drop every two weeks. This one digs into the real decisions clinicians, CMIOs, and service line leaders face when bringing AI into patient care.
What this episode covers
- How do we balance innovation with patient safety?
- What does effective governance look like in real clinical settings?
- Who owns accountability when AI influences diagnosis, treatment planning, or outcomes?
- How can health systems scale AI without introducing bias or unchecked risk?
Why this matters for your health system
AI is already influencing care decisions, often faster than policy can keep up. Without clear governance, you risk safety events, hidden bias, workflow friction, and vendor-driven adoption that outruns clinical validation.
Strong governance isn't an academic exercise; it's a clinical safety program with owners, metrics, and ongoing audits. If you don't define it, it will define you.
Practical actions you can implement now
- Stand up a cross-functional AI governance council (clinical, quality, legal, IT, informatics, DEI, compliance, patient safety).
- Define clear decision rights and accountability (RACI) for use case selection, approval, monitoring, and incident response.
- Inventory every AI-enabled workflow touching patients; require a clinical sponsor and a quality/safety owner for each.
- Mandate pre-deployment clinical validation against ground truth; document intended use, limits, and contraindications.
- Test for bias across subgroups; set acceptance thresholds before go-live and revisit them quarterly (a minimal check of this kind is sketched after this list).
- Keep humans in the loop for high-risk decisions; require override visibility and audit trails in the EHR.
- Create a post-deployment monitoring plan: performance drift, fairness checks, error reporting, and retraining cadence (a drift-check sketch also follows this list).
- Require model cards or equivalent documentation from vendors; ban "black box" claims that lack clinical evidence.
- Bake safety and privacy terms into contracts: data use boundaries, recall processes, uptime, liability, and audit access.
- Train clinicians on indications, red flags, and escalation paths; measure adoption and alert fatigue.
- Tell patients when AI informs their care decisions, and explain how clinicians supervise its use.
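
To make the subgroup testing item concrete, here is a minimal sketch in Python. Everything in it is illustrative: the column names (y_true, y_pred, race_ethnicity), the 0.85 sensitivity floor, and the CSV filename are assumptions, and your governance council would choose the real metrics and thresholds.

```python
# Hypothetical pre-go-live subgroup check: sensitivity per subgroup
# against an acceptance floor agreed on before deployment.
import pandas as pd
from sklearn.metrics import recall_score

MIN_SENSITIVITY = 0.85              # illustrative acceptance threshold
SUBGROUP_COLUMN = "race_ethnicity"  # assumed demographic field

def check_subgroup_sensitivity(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-subgroup sensitivity with a pass/fail flag.

    Expects columns 'y_true' (ground truth), 'y_pred' (model output),
    and the subgroup column; all names are assumptions.
    """
    rows = []
    for group, sub in df.groupby(SUBGROUP_COLUMN):
        sens = recall_score(sub["y_true"], sub["y_pred"], zero_division=0)
        rows.append({"subgroup": group, "n": len(sub),
                     "sensitivity": round(sens, 3),
                     "passes": sens >= MIN_SENSITIVITY})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    cohort = pd.read_csv("validation_cohort.csv")  # hypothetical file
    report = check_subgroup_sensitivity(cohort)
    print(report)
    if not report["passes"].all():
        raise SystemExit("Subgroup sensitivity below threshold; hold go-live.")
```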
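
For the monitoring item, one widely used drift signal (a suggestion here, not something prescribed in the episode) is the population stability index on the model's output scores. The sketch below assumes ten quantile bins and the conventional 0.2 alert level; both are tunable.

```python
# Drift-monitoring sketch: population stability index (PSI) comparing a
# recent window of model scores to the validation-time baseline.
import numpy as np

DRIFT_ALERT = 0.2  # conventional PSI alarm level; tune to your tolerance

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between baseline and current score distributions."""
    # Bin edges from baseline quantiles, so each baseline bin holds ~10%.
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range scores
    base = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr = np.histogram(current, bins=edges)[0] / len(current)
    base = np.clip(base, 1e-6, None)  # avoid log(0) on empty bins
    curr = np.clip(curr, 1e-6, None)
    return float(np.sum((curr - base) * np.log(curr / base)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2, 5, size=5000)  # stand-in for go-live scores
    current_scores = rng.beta(2, 4, size=1000)   # stand-in for this week
    psi = population_stability_index(baseline_scores, current_scores)
    print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > DRIFT_ALERT else "")
```

Wiring a check like this into a scheduled job lets drift surface between quarterly reviews rather than at them.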
Who's on the mic
Panel: Lindsey B. Cotton; Amar Rewari, MD, MBA, FASTRO; Debra Patt, MD, PhD, MBA; and Sanjay Juneja, MD. Moderator and featured voice: Douglas Flora, Editor-in-Chief of AI in Precision Oncology and Executive Medical Director at St. Elizabeth Healthcare.
Listen and get resources
Stream Episode 106 and subscribe at TensorBlack.ai. While you're there, check out their free resources, including the TLDR Newsletter (OncoPulse) and the AI in Oncology Academy, supported by generous sponsors.
If you're building governance capability across teams, you may also find structured learning helpful. Explore role-based options here: Complete AI Training - Courses by Job.
For deeper context
- WHO: Ethics and governance of artificial intelligence for health
- FDA guidance on Clinical Decision Support software
Question for your team: What is the biggest governance challenge you face in deploying clinical AI today, and who owns fixing it?