Static AI policies risk blocking innovation and increasing organizational harm, Blue x Blue's Zarb warns

Rigid AI policies block clinical progress and raise operational risk, expert Julia Zarb warned at HIMSS26. Adaptive governance, explainable tools, and staff evaluation time are key to sustainable adoption.

Categorized in: AI News, Healthcare
Published on: May 07, 2026

Static AI Policies Risk Stalling Healthcare Innovation, Expert Warns

Healthcare organizations that lock in rigid AI policies may inadvertently block clinical progress while increasing their exposure to operational risk, according to Julia Zarb of Blue x Blue. The caution came at HIMSS26 in May.

The problem lies in treating AI governance as a one-time decision. Healthcare systems that adopt fixed rules without room for adjustment struggle to keep pace with how clinicians actually use these tools, and with how the tools themselves improve.

What Healthcare Staff Actually Need

Zarb identified three core requirements for sustainable AI adoption in clinical settings:

  • Clear governance structures that can adapt as tools and use cases evolve
  • Explainable AI systems that show clinicians how outputs were generated
  • Sufficient time for staff to properly evaluate tool results before implementation

Without these elements, healthcare workers operate in a bind. They're expected to use AI systems they don't fully understand, within policies that don't account for real-world clinical workflows.

The Governance Gap

Static policies create a false choice between safety and speed. Organizations either lock down AI use to the point where it becomes impractical, or they bypass governance entirely and accept unknown risks.

Adaptive governance avoids this trap. It establishes clear principles and decision-making processes that allow policies to evolve without losing oversight. Clinical teams get the flexibility to test and refine AI tools while maintaining accountability.

Explainability matters equally. When a clinician can't trace how an AI system reached a particular recommendation, they can't validate whether that recommendation makes sense for their patient. This creates a trust gap that no amount of policy language can bridge.

For healthcare organizations implementing clinical AI, the message is direct: build governance systems that anticipate change, invest in tools that show their reasoning, and give your clinical staff the time they need to evaluate outputs critically.


