Leaders in the Loop: How Experienced Health IT Executives Build Trustworthy AI at Scale
"The goal is not to micromanage individual clinical decisions, but to govern technology and earn trust at scale." That's the core message from Ben Hilmes, CEO of Healthcare IT Leaders and a longtime health IT executive who has led large clinical, analytics and integration teams.
He argues that experienced leaders - not just any "human-in-the-loop" - must stay close to AI initiatives. Their job isn't to review every model output. It's to set the rules, context and accountability that make AI safe, useful and worthy of clinician trust.
Why "human-in-the-loop" isn't enough
Traditional human-in-the-loop often means a person spot-checks outputs. Helpful, but too narrow. Health IT leaders provide something broader: clinical context, regulatory know-how and an eye on how tools change behavior and outcomes.
That perspective keeps AI anchored to quality, safety and culture. It reduces technical friction so clinicians can focus on the moments that matter most.
Govern at scale - don't micromanage
Leaders set direction and guardrails so AI supports care without derailing workflows. Think systems, not single-use tweaks.
- Define approved use cases and risk tiers (e.g., informational vs. recommendation vs. automation).
- Set clear accountability: model owner, clinical sponsor, compliance and risk partners.
- Create feedback loops that capture real clinical impact, drift and unintended effects.
- Be transparent about capabilities and limits; make it easy to override or escalate.
- Require rigorous clinical validation pre-launch and ongoing monitoring post-launch.
- Protect workflows with fail-safes, auditability and rapid rollback paths.
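The guardrails above can be expressed as a lightweight policy structure rather than a manual checklist. This is a minimal sketch, not a production system; the tier names come from the list above, and every field and role name is an illustrative assumption:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Risk tiers from the governance list above (names are illustrative)."""
    INFORMATIONAL = 1   # surfaces context; clinician interprets freely
    RECOMMENDATION = 2  # suggests an action; clinician must confirm
    AUTOMATION = 3      # acts without per-case review; tightest controls

@dataclass
class AIUseCase:
    """One approved use case with named accountability and fail-safes."""
    name: str
    tier: RiskTier
    model_owner: str           # accountable for model performance
    clinical_sponsor: str      # accountable for clinical fit
    override_enabled: bool = True  # make it easy to override or escalate
    rollback_plan: str = ""        # rapid rollback path if monitoring flags drift

def required_reviews(case: AIUseCase) -> list[str]:
    """Higher tiers carry more pre-launch validation and post-launch checks."""
    reviews = ["clinical validation", "post-launch monitoring"]
    if case.tier is not RiskTier.INFORMATIONAL:
        reviews += ["bias and safety testing", "compliance sign-off"]
    if case.tier is RiskTier.AUTOMATION:
        reviews += ["fail-safe audit", "rapid-rollback drill"]
    return reviews

# Hypothetical example: a sepsis alert that recommends but never auto-acts.
sepsis_alert = AIUseCase(
    name="sepsis-early-warning",
    tier=RiskTier.RECOMMENDATION,
    model_owner="analytics team",
    clinical_sponsor="CMIO",
)
print(required_reviews(sepsis_alert))
```

The point of encoding the rules is that approval paths become auditable and consistent instead of living in individual leaders' heads.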
Organizations rarely distrust AI because the math is broken. They distrust it because governance and transparency lag deployment. Strong leadership closes that gap and makes adoption sustainable.
For reference frameworks, see the NIST AI Risk Management Framework (NIST AI RMF) and FDA resources on AI/ML-enabled medical devices (FDA AI/ML in SaMD).
Embedded leaders make AI work locally
Healthcare delivery is local. Patient demographics, community needs, clinician preferences, legacy processes and facility constraints vary widely. A system that thrives in one hospital can stall in another.
That's why embedded leaders matter. They translate technical capability into local fit. They adapt enterprise tools to support frontline care instead of forcing teams to bend around limitations.
What embedded leaders see that others miss
- Informal workarounds clinicians rely on to keep care moving.
- Historical reasons behind "why we do it this way."
- Cultural factors that speed up or slow down adoption.
- Which physician champions can sway peers - and which workflows are too brittle to change without heavy support.
Under financial and operational pressure, this role becomes essential. It's how digital investments turn into better outcomes and higher staff satisfaction instead of shelfware.
Practical next steps for health systems
- Stand up an AI council with clinical, IT, quality, compliance and risk leadership. Give it teeth.
- Catalog AI use cases, assign risk tiers and define approval and review paths.
- Maintain a model registry with owners, data lineage, validation evidence and change history.
- Require clinical validation, bias and safety testing; set up continuous monitoring and alerting.
- Publish plain-language model cards and educate clinicians on what the tool can and cannot do.
- Collect frontline feedback (signal quality, false positives, time saved) and act on it fast.
- Pilot with embedded leaders, prove local fit, then expand methodically.
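The model registry step above need not start as an enterprise platform; structured records with owners, lineage, evidence and change history are enough to begin. A minimal sketch follows, in which the model name, roles and field names are all hypothetical, not a standard schema:

```python
import json
from datetime import date

# Minimal in-memory model registry: owners, data lineage,
# validation evidence and change history per model.
registry: dict[str, dict] = {}

def register_model(model_id: str, owner: str, clinical_sponsor: str,
                   data_lineage: str, validation_evidence: str) -> None:
    """Create a registry entry with clear accountability fields."""
    registry[model_id] = {
        "owner": owner,                            # model owner
        "clinical_sponsor": clinical_sponsor,      # accountable clinician
        "data_lineage": data_lineage,              # where training data came from
        "validation_evidence": validation_evidence, # links/notes, never raw PHI
        "change_history": [],
    }

def log_change(model_id: str, description: str) -> None:
    """Append a dated entry so every model change is auditable."""
    registry[model_id]["change_history"].append(
        {"date": date.today().isoformat(), "description": description}
    )

# Hypothetical example entry.
register_model(
    "readmission-risk-v2",
    owner="data science",
    clinical_sponsor="CNO",
    data_lineage="EHR encounters 2019-2023, de-identified",
    validation_evidence="retrospective validation study; silent pilot on 2 units",
)
log_change("readmission-risk-v2", "recalibrated after drift alert")
print(json.dumps(registry["readmission-risk-v2"], indent=2))
```

Even this much makes the review paths concrete: a model with no owner, no lineage or an empty change history is immediately visible as a governance gap.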
AI should clear the path for clinicians, not crowd it. Keep experienced leaders in the loop, govern at scale and let care teams focus on the human connection at the heart of healthcare.