From Pilots to Measurable ROI: Practical AI Strategies from the HIMSS AI Leadership Summit at Any Scale
HIMSS AI Summit outlines how to drive ROI and better care: pick outcomes, baseline, and translate wins to dollars. Target scribing, imaging triage, and safer governance.
Published on: Sep 25, 2025

AI Strategy From the HIMSS Summit: How to Drive ROI and Transform Care Delivery
Healthcare executives and technology leaders convened in Chicago at the first HIMSS AI Leadership Strategy Summit with a clear mandate: use AI tools to drive measurable ROI and materially improve care delivery. The conversation centered on practical strategy, not hype: what to build, what to buy, how to govern it, and how to prove it works.
Start With Outcomes, Not Algorithms
- Define one objective per use case: reduce length of stay, cut readmissions, speed prior authorization, shorten documentation time, or improve capacity management.
- Baseline the metric before you test anything. No baseline, no ROI.
- Translate clinical or operational wins into dollars: minutes saved, units of throughput, avoided events, or revenue integrity gains.
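The last step, translating minutes saved into dollars, is simple arithmetic, but writing it down forces you to name every assumption. A minimal back-of-envelope sketch (all inputs are hypothetical placeholders, not summit figures):

```python
# Hypothetical inputs: replace with your measured baseline and pilot data.
MINUTES_SAVED_PER_NOTE = 3        # assumed: pilot note time vs. baseline
NOTES_PER_CLINICIAN_PER_DAY = 20  # assumed documentation volume
CLINICIANS = 50                   # assumed pilot cohort size
WORKDAYS_PER_YEAR = 230           # assumed working days
LOADED_COST_PER_HOUR = 150.0      # assumed fully loaded clinician cost, USD

# Convert minutes saved per task into annual hours, then dollars.
hours_saved = (MINUTES_SAVED_PER_NOTE * NOTES_PER_CLINICIAN_PER_DAY
               * CLINICIANS * WORKDAYS_PER_YEAR) / 60
annual_value = hours_saved * LOADED_COST_PER_HOUR
print(f"Hours saved/year: {hours_saved:,.0f}; value: ${annual_value:,.0f}")
```

The point is not precision; it is that every number in the model is either a measured baseline or a labeled assumption someone can challenge.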
High-ROI Use Cases Executives Prioritized
- Clinical documentation assistance and ambient scribing (reduce note time, burnout, and overtime).
- Imaging triage and queue optimization (faster reads, better throughput).
- Care coordination and discharge planning (free up beds, cut avoidable days).
- Prior authorization and denial prevention (shorter cycles, improved cash flow).
- Virtual nursing and patient outreach (close gaps, stabilize staffing).
- Capacity and staffing optimization (predict demand, right-size schedules).
Build the Data and Integration Foundation
- Integrate with the EHR and workflow tools your teams already use. If it's not in the flow, it won't be used.
- Standardize key data elements (e.g., codes, notes, device data) and map interfaces early to avoid rework later.
- Set up monitoring for data quality, prompt drift, model performance, and exceptions from day one.
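Day-one monitoring does not need a platform to start; even a rolling check of one workflow signal against its go-live baseline will catch drift early. A minimal sketch, assuming acceptance rate of AI suggestions is the signal you track (both thresholds are illustrative):

```python
from statistics import mean

BASELINE_ACCEPT_RATE = 0.92   # assumed acceptance rate measured at go-live
ALERT_THRESHOLD = 0.05        # assumed tolerated absolute drop before escalation

def drift_alert(recent_outcomes, baseline=BASELINE_ACCEPT_RATE,
                threshold=ALERT_THRESHOLD):
    """Flag when the rolling acceptance rate falls past the tolerated band.

    recent_outcomes: 1 = suggestion accepted, 0 = overridden by the clinician.
    Returns (alert_fired, current_rate).
    """
    current = mean(recent_outcomes)
    return (baseline - current) > threshold, current

# Example week: 85 accepted, 15 overridden -> acceptance has drifted to 0.85.
alert, rate = drift_alert([1] * 85 + [0] * 15)
```

The same pattern extends to data-quality checks (missing fields per message) and exception counts; the discipline is choosing the signal and baseline before launch, not after.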
Governance and Risk: Make It Safe and Compliant
- Establish an AI Council with clinical, operations, IT, security, legal, and compliance. Give it decision rights.
- Require human-in-the-loop for clinical decisions and clear escalation paths for edge cases.
- Document model purpose, data sources, evaluation results, and change logs (audit-ready).
- Use recognized frameworks for risk and bias management, such as the NIST AI Risk Management Framework, and review FDA guidance for AI/ML-enabled medical devices (AI/ML SaMD).
Buy vs. Build: A Simple Decision Lens
- Buy when the task is common (scribing, summarization, chat, RCM automation), the vendor integrates cleanly with your EHR, and you can verify outcomes fast.
- Build when your workflows are unique, your data is proprietary, or the use case is core to competitive advantage.
- Hybrid often wins: buy the workflow layer, build the models or prompts that use your data.
Vendor Diligence Questions
- Show baseline and post-implementation metrics from similar customers. How were they measured?
- Total cost of ownership, spelled out: licenses, tokens, integration, change management, and support.
- Data rights: who owns outputs, fine-tuning artifacts, and logs? Is PHI used to train shared models?
- Security: BAA, access controls, red-teaming results, incident response commitments.
- Workflow proof: native EHR integration, click reduction, role-based views, and fallbacks when the model is uncertain.
Measurement That Sticks
- Adoption: eligible users, active users, and task completion rates.
- Time: minutes saved per task and per clinician per shift.
- Quality: accuracy, override rate, and outcomes (e.g., LOS, readmissions, denials).
- Financials: cost per task, net savings after licenses and support, payback period, and IRR.
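Payback period and IRR are the two financial metrics above that teams most often hand-wave. Both fit in a few lines; this sketch uses a hypothetical $120k upfront spend returning $60k in net annual savings for three years (IRR is found by bisection over net present value, so no financial library is needed):

```python
def payback_months(net_savings_per_month, upfront_cost):
    """Months until cumulative net savings cover the upfront spend."""
    return upfront_cost / net_savings_per_month

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Annual IRR via bisection; cash_flows[0] is the (negative) investment."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:   # NPV still positive: the true rate is higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: $120k upfront, $60k net savings per year for 3 years.
flows = [-120_000, 60_000, 60_000, 60_000]
print(f"Payback: {payback_months(5_000, 120_000):.0f} months, "
      f"IRR: {irr(flows):.1%}")
```

Net savings here means savings after licenses and support, matching the bullet above; feeding gross savings into either formula is the most common way these numbers get inflated.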
Change Management Playbook
- Co-design with frontline teams. Pilot with champions, not skeptics.
- Train for the new workflow, not just the tool. Provide scripts, quick-starts, and office hours.
- Listen to feedback weekly, ship adjustments biweekly, and publish performance dashboards.
Operating Models That Scale
- Product owner per use case, accountable for adoption and outcomes.
- Shared services: data engineering, security, legal, procurement, and MLOps.
- Portfolio view: track all AI initiatives, stage gates, and resource allocation in one place.
Small vs. Large Organization Paths
- Smaller systems: pick one use case with a tight loop (e.g., scribing). Go live in 60-90 days. Prove savings, then expand.
- Large systems: run a 4-6 use case portfolio, standardize tooling, and enforce a common governance and measurement framework.
90-Day Launch Plan (Practical and Lean)
- Days 0-15: Choose the use case. Define one metric. Capture baseline. Identify a pilot site and executive sponsor.
- Days 16-45: Finalize vendor or build scope. Complete data mapping, risk review, and integration plan. Draft training materials.
- Days 46-75: Configure, integrate, and test with 10-20 users. Validate accuracy and workflow fit. Adjust.
- Days 76-90: Launch pilot to a full unit. Publish a dashboard. Decide scale/stop/iterate with hard data.
Budgeting and Funding
- Fund from the value stream that benefits (e.g., RCM budget funds prior auth automation).
- Tie vendor payments to milestones and outcomes where possible.
- Reinvest savings into the next two use cases to compound results.
What Success Looks Like
- Clear business outcomes with baselines and ongoing measurement.
- Clinicians who spend more time with patients and less time on screens.
- A repeatable process to evaluate, deploy, and scale AI safely across the enterprise.
Tags: clinical AI, strategic planning
Next Steps
- Set your first use case, baseline the metric, and book a 30-minute cross-functional kickoff.
- If you need upskilling across roles, explore focused learning paths: AI courses by job and latest AI courses.