Smart AI Investments and Strategic Alignment in Healthcare: Live from the HIMSS AI Leadership Strategy Summit

From HIMSS' Chicago summit: leaders insisted data governance, culture, provider experience, and security matter most. ROI comes from portfolio bets, pilots, and hard metrics.

Published on: Oct 17, 2025
Smart AI Investments and Strategic Alignment: Takeaways from the Chicago Summit

At the first HIMSS AI Leadership Strategy Summit in Chicago, a special hour-long HIMSSCast episode was recorded live in collaboration with the Straight Outta Health IT podcast. The conversation cut across governance, decision support, strategy, staff buy-in, provider experience, security and the ROI executives expect from AI.

Guests included healthcare strategist Christopher Kunney, Rachini Moosavi (chief analytics officer, UNC Health), and Dr. Ryan Sadeghian (system CMIO, UToledo Health), with added perspectives from Healthcare Finance News.

What executives need to hear first

  • AI without data governance is risk disguised as progress.
  • Culture and incentives drive adoption more than tools.
  • Provider experience is the gatekeeper to patient experience.
  • Models must be effective, safe and explainable to scale.
  • ROI comes from portfolio thinking, not one-off pilots.

Data governance is your control plane

Put decision rights, data quality standards and accountability in writing. Create a unified data inventory, lineage and access model that spans EHR, claims, SDOH and device data. Treat de-identification, consent and audit trails as product features, not compliance chores.

Stand up an AI governance council (clinical, operations, analytics, security, legal, compliance) with clear stage gates for use-case intake, validation and monitoring. Align practices to the NIST AI Risk Management Framework to set common language and risk controls.
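The stage-gate idea above can be made concrete as a lightweight intake record. This is an illustrative sketch, not the summit's or NIST's rubric: the stage names, sign-off roles, and risk tiers are assumptions a council would define for itself.

```python
from dataclasses import dataclass

# Hypothetical stage-gate record for AI use-case intake, loosely mirroring
# the intake -> validation -> pilot -> monitoring flow described above.
STAGES = ["intake", "validation", "pilot", "monitoring"]

@dataclass
class UseCase:
    name: str
    owner: str
    phi_involved: bool
    risk_tier: str          # "low" | "medium" | "high", assigned by the council
    stage: str = "intake"

    def advance(self, approvals: set) -> str:
        """Move to the next stage only if the required sign-offs are present."""
        required = {"clinical", "security"} if self.risk_tier == "high" else {"clinical"}
        if self.phi_involved:
            required.add("compliance")
        missing = required - approvals
        if missing:
            raise PermissionError(f"{self.name}: missing sign-off from {sorted(missing)}")
        self.stage = STAGES[STAGES.index(self.stage) + 1]
        return self.stage

uc = UseCase("ambient-scribe", "cmio", phi_involved=True, risk_tier="high")
print(uc.advance({"clinical", "security", "compliance"}))  # -> validation
```

Encoding the gates in a shared artifact like this keeps "who must approve what" from living in individual inboxes.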

Build a culture of AI (and real staff buy-in)

Adoption accelerates when the people doing the work help define the work. Recruit "clinical and operational champions," run weekly feedback loops and publish transparent metrics. Tie incentives to outcome improvements (throughput, documentation time, denial rates), not tool usage.

Give teams safe sandboxes and lightweight training. Make it easy to opt in, even easier to give feedback and impossible to ignore results.

Provider and patient experience: start where friction is highest

Target documentation burden, care coordination and prior authorization before tackling advanced use cases. Pair ambient scribe tools with clear validation steps and measure minutes saved per encounter, note accuracy and burnout indicators.

For patient-facing tools (triage, follow-ups, education), require human oversight, clear disclaimers and escalation paths. Track CSAT, wait times and resolution speed end to end.

Effective, safe and transparent models

Whether you build or buy, demand model cards, data provenance, validation results and ongoing performance reports. Test for bias across demographic cohorts and clinical settings. Keep humans in the loop where risk is non-trivial and define fallback modes for outages or confidence thresholds.
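A cohort bias test can start very simply: compare the model's positive-prediction rate across demographic groups before anything subtler. The function names, cohorts, and data below are illustrative; real validation would use clinically meaningful cohorts and outcome-aware metrics, not just selection rates.

```python
# Minimal fairness spot-check: positive-prediction rate per cohort,
# plus the ratio of the lowest rate to the highest (1.0 = parity).
from collections import defaultdict

def rate_by_cohort(preds, cohorts):
    """Per-cohort positive rate; preds are 0/1 model outputs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for p, c in zip(preds, cohorts):
        totals[c] += 1
        positives[c] += p
    return {c: positives[c] / totals[c] for c in totals}

def disparity(rates):
    """Min/max rate ratio across cohorts; values far below 1.0 warrant review."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

preds   = [1, 0, 1, 1, 0, 1, 0, 0]          # toy model outputs
cohorts = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = rate_by_cohort(preds, cohorts)
print(rates, round(disparity(rates), 2))     # A: 0.75, B: 0.25, ratio 0.33
```

Running a check like this per demographic cohort and per clinical setting, on every model refresh, is what turns "test for bias" from a slide bullet into a monitored control.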

For clinical decision support and SaMD-adjacent use, align with FDA guidance for AI/ML-enabled software. Start with the agency's resources on AI/ML in medical devices and document your change-control process.

Security and data protection are non-negotiable

  • Lock down PHI with least-privilege access, encryption, and strong key management.
  • Demand BAAs, vendor risk assessments and clear data retention and deletion terms.
  • Segment retrieval-augmented generation from public data and log every query that touches PHI.
  • Continuously monitor for data leakage and model drift; automate alerts tied to clinical risk.
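The "log every query that touches PHI" item can be implemented as a thin audit wrapper around retrieval. This is a sketch under assumptions: the retriever, index flag, and log fields are hypothetical stand-ins for a segmented, PHI-scoped store, and hashing the query (rather than storing it) is one design choice for limiting log exposure.

```python
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("phi-audit")

def audited_retrieve(query, user, retriever, phi_index):
    """Run retrieval; if the target index holds PHI, write an audit record first.

    The query text is hashed, not logged verbatim, so the audit trail
    itself does not become another PHI store."""
    if phi_index:
        audit.info("user=%s index=phi query_sha256=%s ts=%d",
                   user,
                   hashlib.sha256(query.encode()).hexdigest()[:16],
                   int(time.time()))
    return retriever(query)

# Toy retriever standing in for a segmented, PHI-scoped vector store.
docs = {"a1c": "HbA1c 7.2% on 2025-01-10"}
result = audited_retrieve("a1c", "dr_lee", lambda q: docs.get(q), phi_index=True)
print(result)
```

Keeping the audit hook at the retrieval boundary, rather than inside each application, makes the "every query" guarantee enforceable in one place.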

Where ROI shows up (and how to prove it)

Treat AI like a portfolio. Score use cases by value (clinical, operational, financial), feasibility (data readiness, workflow fit) and risk. Fund staged pilots with predefined success thresholds and kill-switches.

  • Clinical: reduced length of stay, fewer readmissions, guideline adherence.
  • Operational: shorter cycle times, improved throughput, fewer manual touches.
  • Financial: lower denials, better coding accuracy, avoided costs and capacity unlocked.
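The value-feasibility-risk scoring described above reduces to simple weighted arithmetic. The weights, 1-5 scales, and example use cases below are assumptions for illustration, not the summit's rubric; the point is that the scoring is explicit and repeatable.

```python
# Illustrative portfolio scoring: value and feasibility add, risk subtracts.
# Weights and the 1-5 scales are assumptions a governance council would set.
def score(value, feasibility, risk, w_value=0.5, w_feas=0.3, w_risk=0.2):
    """Each input on a 1-5 scale; higher total = fund sooner."""
    return round(w_value * value + w_feas * feasibility - w_risk * risk, 2)

portfolio = {
    "ambient scribe":    score(value=5, feasibility=4, risk=2),
    "prior auth triage": score(value=4, feasibility=3, risk=3),
    "sepsis prediction": score(value=5, feasibility=2, risk=5),
}
for name, s in sorted(portfolio.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s}")
```

Re-scoring the same sheet at each pilot review is what makes the "scale one, rework one, stop one" decision defensible rather than political.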

A 90-day action plan

  • Weeks 0-2: Form the AI governance council, approve a risk rubric and data access standards.
  • Weeks 3-6: Inventory AI use cases; run a value-feasibility-risk scoring workshop; pick 3 pilots.
  • Weeks 7-10: Launch pilots with baseline metrics, human oversight, model monitoring and feedback loops.
  • Weeks 11-13: Review outcomes; scale one winner, rework one, stop one; publish a one-page ROI brief.

Operating model and skills

Staff cross-functional "AI product trios" (clinical lead, operations owner, data/ML lead) with a security partner on call. Build a playbook for intake, testing, approval and post-go-live monitoring so teams don't reinvent process each time.

If your leaders and frontline teams need structured upskilling, explore role-based programs that map to governance, model risk and delivery. See a curated set of options by job role at Complete AI Training.

Context from the live episode

This special session, recorded on site in Chicago and produced alongside Straight Outta Health IT, challenged leaders to think beyond pilots to operating discipline. The shared message: align AI to enterprise goals, respect the data, bring your people with you and let measured outcomes decide what scales.

