Static AI Policies Risk Stalling Healthcare Innovation, Expert Warns
Healthcare organizations relying on rigid AI governance frameworks may inadvertently block clinical adoption while increasing organizational risk, according to Julia Zarb of Blue x Blue.
The issue centers on how hospitals and health systems approach oversight of AI tools in clinical settings. Fixed policies that don't adapt to new evidence or use cases create friction that slows deployment of useful technologies.
What Healthcare Staff Actually Need
Zarb identified three core requirements for effective AI governance in healthcare:
- Clear, adaptive governance structures that evolve as tools mature and evidence accumulates
- Explainable AI systems that show clinicians how decisions are reached
- Adequate time for staff to evaluate tool outputs before integration into workflows
The third point matters most. Rushing implementation without allowing clinicians to validate results creates both resistance and safety gaps. Staff need space to test a tool's outputs against their own clinical judgment before it becomes part of routine care.
The Governance Problem
One-size-fits-all policies treat all AI tools identically, regardless of risk profile or clinical context. A low-risk administrative scheduling tool doesn't warrant the same scrutiny as a diagnostic support system.
Adaptive governance instead calibrates oversight to actual risk. It also incorporates feedback loops: when clinicians flag issues or identify better use cases, policies adjust accordingly.
For more on implementing AI effectively in healthcare settings, see our resources on AI for Healthcare and AI for Executives & Strategy.