AI in US hospitals: where adoption clusters, and what to do next (2026 baseline)
Published: 15 January 2026
Hospitals across the US are rolling out predictive AI unevenly. Analysis of 3,560 hospitals shows clear clusters of adoption, concentrated in metro areas and specific regions, with noticeable gaps where care needs are highest.
This is a baseline snapshot from 2023-2024. Use it to set priorities, guide investment, and avoid widening disparities.
What the data covered
The study integrated the 2023-2024 AHA IT Supplement, hospital characteristics, community need indicators (ADI, SVI, HPSA, MUA), and CMS hospital quality metrics (2022-2025). It examined where predictive AI is implemented, what predicts adoption, and how patterns differ by region.
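For teams attempting a similar linkage, the join itself is mundane but worth sketching. Below is a minimal, hypothetical pandas sketch, assuming hospital-level files keyed on a CMS Certification Number (CCN) and area-level need indicators keyed on county FIPS; the file and column names are illustrative, not the study's actual schema.

```python
# Hypothetical linkage of hospital-level and area-level files (illustrative names only).
import pandas as pd

aha_it = pd.read_csv("aha_it_supplement_2023_2024.csv", dtype={"ccn": str})                 # AI and interoperability items
hospital_chars = pd.read_csv("hospital_characteristics.csv", dtype={"ccn": str, "county_fips": str})
need = pd.read_csv("community_need.csv", dtype={"county_fips": str})                        # ADI, SVI, HPSA, MUA
cms_quality = pd.read_csv("cms_quality_2022_2025.csv", dtype={"ccn": str})

# Hospital files join on CCN; area-level need indicators join on the hospital's county FIPS.
analytic = (
    aha_it.merge(hospital_chars, on="ccn", how="inner")
          .merge(need, on="county_fips", how="left")
          .merge(cms_quality, on="ccn", how="left")
)
```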
The state of play
- Almost half of surveyed hospitals reported using AI-based predictive models; close to a third reported no predictive models at all.
- Adoption is clustered into hotspots and coldspots, with strong metro concentration that partly mirrors where hospitals already cluster.
- Regional variation is substantial. The South Atlantic shows the highest adoption; the West South Central the lowest.
- Evaluation is thin. Many hospitals reported no assessment of model accuracy or bias.
Who gets left behind
Hospitals serving areas with provider shortages or medical underservice (HPSA, MUA) were less likely to adopt predictive AI. Mental health shortage areas showed the steepest gap.
Findings for socioeconomic measures (ADI and SVI) were mixed. Some SVI themes (minority status; housing and transportation) showed near-parity or slightly higher adoption, but access-related shortage areas consistently lagged.
What drives adoption (and what slows it)
- Interoperability is the strongest predictor. Higher "core" interoperability scores correlate with higher adoption across regions.
- Exchange friction holds hospitals back. Higher barriers to information exchange correlate with lower adoption.
- System membership helps. Being part of a health system, especially one with more centralized delivery, is associated with higher adoption.
- Scale matters. Larger bed capacity correlates with higher adoption.
Important nuance: interoperability can reflect EHR vendor capabilities and system alignment, not just technical maturity. Keep that in mind when interpreting "readiness."
Why one-size policies miss the mark
Geographically weighted regression showed that drivers of adoption vary by region. Interoperability is consistently important, but other factors flip positive or negative depending on local context and institutional characteristics.
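To make the method concrete: geographically weighted regression (GWR) fits a separate, locally weighted model around each location, so coefficients are allowed to vary across space. The sketch below uses the open-source mgwr package on entirely synthetic data; the variable names and the Binomial (logistic) specification are assumptions for illustration, not the study's actual setup.

```python
# A minimal GWR sketch on synthetic hospital data, assuming the PySAL mgwr package.
import numpy as np
from spglm.family import Binomial
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

rng = np.random.default_rng(0)
n = 400

# Synthetic hospital locations (projected x/y) and standardized predictors.
x_coord = rng.uniform(0, 1000, n)
y_coord = rng.uniform(0, 1000, n)
coords = list(zip(x_coord, y_coord))

interop = rng.normal(size=(n, 1))                        # "core" interoperability score
beds = rng.normal(size=(n, 1))                           # bed capacity (standardized)
system = rng.integers(0, 2, size=(n, 1)).astype(float)   # health-system membership flag
X = np.hstack([interop, beds, system])

# Simulate adoption with an interoperability effect that strengthens from west to east.
local_effect = 0.5 + x_coord.reshape(-1, 1) / 1000.0
logit = -0.2 + local_effect * interop + 0.3 * beds + 0.4 * system
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Select a bandwidth, then fit a geographically weighted logistic regression.
bw = Sel_BW(coords, y, X, family=Binomial()).search()
results = GWR(coords, y, X, bw, family=Binomial()).fit()

# One row of local coefficients per hospital:
# column 0 = local intercept, column 1 = local interoperability effect.
print(results.params[:5].round(3))
```

Inspecting the spread of the local coefficients (here, column 1 of results.params) is what reveals whether a driver flips sign or changes strength from one region to another.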
Practical moves for healthcare leaders
- Upgrade the data plumbing first. Prioritize interoperability (interfaces, exchange agreements, standardized data models). Reduce friction across facilities and partners.
- Pick use cases tied to measurable outcomes. Start with high-volume, high-cost, high-variance areas (e.g., deterioration prediction, throughput, care coordination). Define success metrics upfront.
- Stand up evaluation and monitoring. Track calibration, drift, subgroup performance, clinician override rates, and operational impact. Document decision pathways and governance. (A minimal monitoring sketch follows this list.)
- Plan for equity from day one. Include HPSA/MUA facilities in pilots with resourcing (training, integration support, data quality work). Audit bias routinely and adjust workflows, not just models.
- Demand transparency from vendors. Ask for intended use, training data representativeness, external validation, segment performance, monitoring hooks, and rollback plans.
- Leverage system scale. Centralize model governance, validation, and deployment pipelines where possible; localize workflow integration to each site's reality.
- Build staff capability. Clinical leaders, quality teams, and IT need a shared language for model selection, evaluation, and change management. If you need structured upskilling by role, see AI courses by job.
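As a starting point for the evaluation bullet above, the sketch below computes a few of the monitoring quantities mentioned: overall and subgroup discrimination, a Brier score as a rough calibration check, and a population stability index (PSI) for score drift. It assumes you can export per-prediction logs with a score, an observed outcome, and a subgroup label; all column names are illustrative.

```python
# A minimal monitoring sketch over hypothetical per-prediction logs (illustrative column names).
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

def population_stability_index(expected, actual, bins=10):
    """Compare the current score distribution to a reference window (higher = more drift)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = 0.0, 1.0          # scores assumed to be probabilities in [0, 1]
    e, _ = np.histogram(expected, edges)
    a, _ = np.histogram(actual, edges)
    e = np.clip(e / e.sum(), 1e-6, None)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

def quarterly_model_report(df: pd.DataFrame, reference_scores: np.ndarray) -> dict:
    """df columns (assumed): 'score' (predicted probability), 'outcome' (0/1), 'subgroup'."""
    subgroup_auc = df.groupby("subgroup")[["outcome", "score"]].apply(
        lambda g: roc_auc_score(g["outcome"], g["score"])
    )
    return {
        "overall_auc": roc_auc_score(df["outcome"], df["score"]),
        "brier_score": brier_score_loss(df["outcome"], df["score"]),
        "subgroup_auc_gap": float(subgroup_auc.max() - subgroup_auc.min()),
        "score_psi_vs_reference": population_stability_index(
            reference_scores, df["score"].to_numpy()
        ),
    }
```

In practice you would run this per model, per quarter, alert on thresholds (a PSI above roughly 0.2 is a common rule of thumb for meaningful drift), and log results alongside remediation actions.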
Metrics to track each quarter
- Adoption: % of service lines using predictive models; active user counts; alert acceptance/override (see the sketch after this list).
- Evaluation coverage: % of models with recent calibration, bias, and drift checks; time to remediation.
- Equity: subgroup performance gaps; access metrics for HPSA/MUA sites; resource allocation to close gaps.
- Clinical outcomes linked to the specific use case (e.g., excess acute-care days after discharge, hospital-acquired conditions, sepsis bundle timing).
- Operational impact: throughput, length of stay, capacity utilization, ED boarding time.
- Safety: incident reports related to model use, near misses, override rationales.
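Two of these metrics, alert acceptance/override and evaluation coverage, are easy to automate from routine logs. The sketch below assumes a hypothetical alert log and model registry; the column names and the 90-day freshness window are illustrative choices, not standards.

```python
# Hypothetical quarterly rollup of two tracking metrics (illustrative schemas).
import pandas as pd

def alert_override_rate(alerts: pd.DataFrame) -> float:
    """Share of actioned alerts that clinicians overrode (column assumed: 'action')."""
    acted = alerts[alerts["action"].isin(["accepted", "overridden"])]
    return float((acted["action"] == "overridden").mean())

def evaluation_coverage(registry: pd.DataFrame, as_of: str, max_age_days: int = 90) -> float:
    """Share of registered models whose last bias/calibration check is recent enough."""
    age = pd.Timestamp(as_of) - pd.to_datetime(registry["last_bias_check"])
    return float((age <= pd.Timedelta(days=max_age_days)).mean())

# Tiny usage example with made-up records.
alerts = pd.DataFrame({"action": ["accepted", "overridden", "overridden", "dismissed"]})
registry = pd.DataFrame({"last_bias_check": ["2025-11-01", "2025-03-15"]})
print(alert_override_rate(alerts), evaluation_coverage(registry, as_of="2026-01-15"))
```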
Policy and payer implications
- Incentivize evaluation, not just adoption. Tie funding to evidence of monitoring, equity audits, and safe integration practices.
- Support grants and technical assistance for shortage areas to close resource gaps that block implementation.
- Encourage transparency: minimum reporting on intended use, validation, monitoring, and decommissioning criteria.
What we still don't know
- Timing: many datasets can't pinpoint when a model went live, which complicates outcome analyses.
- Use-case specificity: current surveys capture "predictive models" broadly, making it hard to tie outcomes to a particular model.
- Generative AI: only limited coverage in the 2024 survey; not yet enough signal to assess its impact.
- Causality: early associations may reflect institutional capacity rather than AI effects.
Bottom line
Predictive AI adoption in US hospitals is clustered and uneven. Interoperability and system structure matter, but local context shapes everything else.
If you lead operations, quality, or IT, focus on the plumbing (data exchange), proof (evaluation), and people (workflow and equity). Move in steps, measure relentlessly, and resource the facilities that need it most.