Designing human-in-the-loop healthcare AI that stands up under pressure

Julia Zarb says HITL in healthcare AI often collapses into a click. At HIMSS26, she'll push leaders to design workflows that give humans time, evidence, and clear decision rights.

Published on: Jan 14, 2026

"Wait, I'm the Human in the Loop?" Julia Zarb previews her HIMSS26 talk on practical HITL in healthcare AI

Human-in-the-loop (HITL) has become a checkbox in healthcare AI - mentioned in policies, governance docs and deployment plans. Julia Zarb, principal and founder of Blue x Blue, argues that what HITL promises on paper and what people can actually do in the workflow are drifting apart.

As AI touches care, claims and compliance, the moment of decision is getting murky. Who owns the call? What constraints are visible? What gets recorded when decisions are questioned? Zarb will take this head-on at the 2026 HIMSS Global Health Conference & Exposition in March.

Where HITL breaks in practice

Most HITL language assumes a person will review, accept, modify or reject an AI-influenced recommendation. That sounds safe - until you zoom into the real environment. Clinicians, nurses and managers often face time pressure, partial context and limited visibility into policy constraints or model rationale.

Under those conditions, the "review" step can degrade into a click, not a decision. Organizations end up with inconsistent acceptance criteria across teams and a scattered trail of why a decision was made - stuck in emails, chat threads or memory. When audits, denials or adverse events hit, basic questions become hard to answer: who decided what, based on which evidence and under which constraints.

Why this matters now

Surveys already suggest many physicians expect to be held accountable for AI-related errors, even if they had no role in selecting or configuring the system. The common "learned intermediary" idea - that a human absorbs responsibility by applying judgment - only works if the workflow gives that human time, evidence and clear decision rights.

If those conditions aren't present, oversight becomes symbolic. The person is "in the loop," but the loop isn't reliable, safe or consistent. The core issue isn't human presence - it's whether the workflow lets that human do the job.

A shift in the question

Zarb's key message: stop asking "Do we have HITL?" and start asking "Have we designed for the human in the loop?" That reframing pushes leaders to define the job at the decision point and build for it. The questions below frame that job; a short sketch after the list shows one way the answers might be encoded.

  • What is the person expected to do at that moment?
  • How much time do they have?
  • Which evidence, policies and risk thresholds are visible?
  • What is the clear path to disagree, pause or escalate?
  • How is the decision - and AI's influence - recorded for later review?
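One way to make those answers concrete is to write them down as a reviewable specification rather than leaving them implicit in policy documents. Here is a minimal sketch in Python; the DecisionPoint structure and every field name are illustrative assumptions, not something Zarb prescribes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionPoint:
    """Illustrative spec for one AI-influenced decision in a workflow."""
    name: str                       # e.g. "claims adjudication review"
    owner_role: str                 # who holds approve/deny/override rights
    time_budget_seconds: int        # realistic review time, not aspirational
    visible_evidence: List[str]     # evidence surfaced to the reviewer
    visible_constraints: List[str]  # policies / risk thresholds shown in the UI
    escalation_path: str            # route to disagree, pause or escalate
    record_fields: List[str] = field(default_factory=lambda: [
        "decision", "ai_recommendation", "evidence_cited", "overrides",
    ])

# Example: one decision point on the claims side
claims_review = DecisionPoint(
    name="claims adjudication review",
    owner_role="claims manager",
    time_budget_seconds=120,
    visible_evidence=["model rationale", "policy citations"],
    visible_constraints=["coverage policy", "denial-risk threshold"],
    escalation_path="route to human-to-human review queue",
)
```

The value is less in the code than in the forcing function: if time_budget_seconds is unrealistic or visible_constraints comes back empty, the loop is symbolic and you know it before go-live.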

Red flags to spot immediately

  • HITL that's effectively just a click.
  • No clear decision owner at the point of action.
  • Limited visibility into constraints, policy conflicts or risk thresholds.
  • No plan to monitor model performance or drift.
  • No reliable record of how AI shaped the decision.

Quick steps for healthcare leaders

  • Define decision rights: who owns approve/deny/override, and under what conditions.
  • Expose constraints in the UI: policies, thresholds, contraindications and known exclusions.
  • Make disagreement easy: one-click routes to pause, escalate or request human-to-human review.
  • Log the "why": capture evidence cited, constraints consulted and any overrides (see the sketch after this list).
  • Stand up drift watch: operational metrics, spot audits and feedback loops tied to real outcomes.
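The last two steps lend themselves to a concrete sketch. The snippet below shows one possible shape for a decision log plus a simple rubber-stamp check; the function names (log_decision, acceptance_rate) and the in-memory store are assumptions for illustration, not a prescribed implementation.

```python
import json
from datetime import datetime, timezone

def log_decision(store, *, decision_point, reviewer, action,
                 ai_recommendation, evidence_cited, constraints_consulted,
                 override_reason=None):
    """Append one auditable record of how AI shaped a decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_point": decision_point,
        "reviewer": reviewer,
        "action": action,  # approve / deny / override / escalate
        "ai_recommendation": ai_recommendation,
        "evidence_cited": evidence_cited,
        "constraints_consulted": constraints_consulted,
        "override_reason": override_reason,
    }
    store.append(record)
    return record

def acceptance_rate(records, decision_point):
    """Spot-check: share of AI recommendations accepted as-is.

    A rate drifting toward 100% can signal rubber-stamping; a sharp
    shift in either direction is a cue to audit model and workflow."""
    relevant = [r for r in records if r["decision_point"] == decision_point]
    if not relevant:
        return None
    accepted = sum(1 for r in relevant if r["action"] == "approve")
    return accepted / len(relevant)

# Usage: an in-memory list stands in for a durable, access-controlled
# audit store, which production systems would need instead.
audit_log = []
log_decision(audit_log,
             decision_point="claims adjudication review",
             reviewer="jdoe",
             action="override",
             ai_recommendation="deny",
             evidence_cited=["policy 4.2", "prior approval on file"],
             constraints_consulted=["coverage policy"],
             override_reason="documented medical necessity")
print(json.dumps(audit_log[-1], indent=2))
print(acceptance_rate(audit_log, "claims adjudication review"))
```

Whatever the storage layer, the point is that "who decided what, based on which evidence and under which constraints" becomes a query, not an archaeology project through emails and chat threads.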

If you're aligning governance with broader industry guidance, the NIST AI Risk Management Framework is a useful reference point for risk, controls and documentation practices. See NIST AI RMF.

Session details

Session: "Wait, I'm the Human in the Loop?"

Speaker: Julia Zarb, Principal and Founder, Blue x Blue

When: Tuesday, March 10, 10:15 a.m.-11:15 a.m.

Where: Palazzo I Level 5, HIMSS26, Las Vegas

For broader event information, visit HIMSS Global Conference.


