HIMSS26 Warning: Automation Complacency Is Healthcare's Next AI Risk

At HIMSS26, Ben Scharfe warned that AI's next safety risk is automation complacency: small mistakes slipping through routine use. Tier risks, keep humans in the loop, and monitor.

Published on: Mar 10, 2026

HIMSS26: Automation Complacency Is the Next Patient Safety Risk in AI-Driven Care

Healthcare is moving into a more mature phase of AI. The challenges are no longer abstract; they're showing up in daily workflows. That was the message from Ben Scharfe, executive vice president for artificial intelligence at Altera Digital Health, speaking at HIMSS26. Altera Digital Health (booth 4431) builds clinical, financial and interoperability systems for hospitals and large practices.

The quiet risk after go-live: automation complacency

Scharfe warned that the biggest risk isn't always model accuracy; it's what happens once the tools land in the workflow. As with alert fatigue, a constant stream of AI outputs can dull attention and reduce scrutiny. When that happens, small errors slip through and compound, putting patient safety at risk.

One hot spot: ambient listening for clinical notes. A single misheard word can live on in the chart, look fine for billing, and mislead the next clinician. Over time, those small inconsistencies create a chain of misinformation that's hard to unwind.

Risk isn't one-size-fits-all, and liability won't be either

The safety and legal exposure change by use case. AI that schedules visits does not carry the same weight as AI that flags a stroke risk. As AI moves closer to clinical decision-making, expect tighter oversight and shared accountability among clinicians, health systems and vendors.

The practical takeaway: treat each AI use case as its own risk class and set controls accordingly. Don't borrow guardrails from low-stakes tools and assume they'll work for high-stakes decisions.

What healthcare IT leaders can do now

  • Tier your use cases by risk. Define levels (admin, clinical support, high-stakes clinical) and set distinct review, logging and escalation standards for each.
  • Keep a human in the loop, by design. Require clinician attestation on key fields, use forced pauses on high-impact recommendations and sample outputs for weekly review.
  • Expose uncertainty and provenance. Show confidence scores, source citations and links back to the record so clinicians can quickly verify.
  • Harden ambient documentation workflows. Add structured "read-back" prompts for meds, allergies, problem list and plan. Double-check high-risk terms and dosages before signing.
  • Monitor what matters. Track override rates, near-misses, false accepts/false rejects and time-to-detect/correct. Build dashboards that make drift and error hotspots obvious.
  • Train for cognitive bias. Teach teams about automation bias and deskilling. Make "trust, but verify" the norm.
  • Set vendor guardrails. Require SLAs for accuracy by use case, advance notice of model changes, shadow-mode trials and evaluation on your data before go-live.
  • Log everything. Maintain audit trails for prompts, versions and user actions. You'll need them for QA, incidents and legal review.
  • Treat incidents like safety events. Use root-cause analysis, share learnings system-wide and fold fixes into governance.
  • Stand up multidisciplinary governance. Include clinical, safety, legal, IT, data science and frontline leaders. Review metrics monthly and retire tools that don't meet the bar.
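To make the first two bullets concrete, here is a minimal sketch of what tiered controls might look like in code. The tier names follow the article's three levels (admin, clinical support, high-stakes clinical), but the control fields, thresholds and retention periods are illustrative assumptions, not recommendations from the talk.

```python
from dataclasses import dataclass

# Hypothetical per-tier control set; field names and values are
# illustrative assumptions, not from the HIMSS26 talk.
@dataclass(frozen=True)
class TierPolicy:
    name: str
    requires_clinician_signoff: bool   # human-in-the-loop attestation
    sample_review_rate: float          # fraction of outputs pulled for weekly review
    audit_log_retention_days: int      # how long prompts/versions/actions are kept

POLICIES = {
    "admin": TierPolicy("admin", False, 0.01, 90),
    "clinical_support": TierPolicy("clinical_support", True, 0.05, 365),
    "high_stakes_clinical": TierPolicy("high_stakes_clinical", True, 0.25, 3650),
}

def controls_for(tier: str) -> TierPolicy:
    """Look up the control set for a use case; an unknown tier fails loudly
    rather than silently inheriting low-stakes guardrails."""
    return POLICIES[tier]
```

The point of the lookup failing on unknown tiers is the article's warning in code form: a use case that hasn't been classified should never silently borrow a low-stakes tool's guardrails.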

Metrics that keep you honest

  • Error rate by use case and site, with trend lines
  • Clinician override and edit rates (rising = healthy skepticism; zero can signal blind trust)
  • Documentation accuracy on meds, problems, allergies and orders
  • Time from error creation to detection and correction
  • Near-miss and safety event reports linked to AI-assisted steps
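Two of these metrics, override/edit rates and time from error creation to correction, can be computed directly from review logs. A minimal sketch follows; the event record shape (`accepted`, `edited` flags per AI output) is an assumption made for illustration.

```python
from datetime import datetime, timedelta

# Illustrative clinician-review log; field names are assumptions for this sketch.
events = [
    {"output_id": 1, "accepted": True,  "edited": False},
    {"output_id": 2, "accepted": True,  "edited": True},
    {"output_id": 3, "accepted": False, "edited": False},  # clinician override
    {"output_id": 4, "accepted": True,  "edited": False},
]

def override_rate(evts):
    """Share of AI outputs clinicians rejected outright.
    Zero over a long window can signal blind trust, not perfection."""
    return sum(1 for e in evts if not e["accepted"]) / len(evts)

def edit_rate(evts):
    """Share of accepted outputs that clinicians modified before signing."""
    accepted = [e for e in evts if e["accepted"]]
    return sum(1 for e in accepted if e["edited"]) / len(accepted)

def time_to_correct(created: datetime, corrected: datetime) -> timedelta:
    """Latency from error creation to correction, per safety event."""
    return corrected - created
```

Trending these per use case and per site, rather than as one global number, is what makes drift and error hotspots visible on a dashboard.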

Where to start this quarter

  • Pick two high-volume workflows (e.g., ambient notes and triage suggestions). Run a 30-day "trust-and-verify" program with sampling and attestation.
  • Instrument your AI stack for audit trails and monitoring before you scale further.
  • Publish a simple policy: risk tiers, sign-off rules, vendor requirements and incident handling. Keep it to one page per use case.

As Scharfe put it, the psychology of daily use is now the primary battleground. Comfort is the risk. Build systems that make the right level of friction the default, especially where patient harm is a possibility.

Bottom line: AI is here, and it's useful. But without intentional friction, oversight and clear ownership, automation complacency will become a recurring safety event. Treat it now, while the fixes are still straightforward.
