Measuring AI Use in Healthcare Decisions: Development and Psychometric Validation of a 12-Item Instrument

Peer-reviewed study offers a validated 12-item tool to gauge AI use across clinical, organizational, and shared decisions. Set a baseline, pick high-ROI pilots, and track gains.

Published on: Oct 22, 2025

Measuring AI Use in Healthcare Decision-Making: A Practical, Validated Tool You Can Apply

Healthcare runs on decisions made with incomplete information. AI can help, but most organizations don't know how to measure real-world use beyond anecdotes and pilots.

A new peer-reviewed study developed and validated a 12-item instrument that measures AI utilization across three decision-making domains in healthcare organizations: clinical, organizational, and shared decision-making. It's short, statistically sound, and built for operational use.

What the study built

The team created a concise questionnaire, refined it with experts, and validated it with healthcare staff across multiple organizations. The final tool has 12 items, evenly split across three domains: clinical decision-making, organizational decision-making, and shared decision-making.

Validation highlights: an average factor loading of 0.8, a single principal component explaining 65.31% of variance, Cronbach's alpha of 0.95, and an ICC of 0.95. In plain terms: the items hang together well and measure a single, meaningful construct: AI utilization in decision-making.
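If you want to run the same reliability check on your own survey data, Cronbach's alpha takes only a few lines of NumPy. The sketch below is a minimal illustration using a hypothetical response matrix (respondents by 12 items on a 1-5 scale); it is not the study's data or code.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = responses.shape[1]                          # number of items (12 here)
    item_vars = responses.var(axis=0, ddof=1)       # per-item variance
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 40 respondents, 12 items, 1-5 scale
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(40, 12)).astype(float)
print(f"alpha = {cronbach_alpha(demo):.2f}")
```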

How it was validated

  • Face validity: All items cleared the impact score threshold; wording and clarity were refined with language experts and stakeholders.
  • Content validity: Lawshe's method was applied; items met the content validity ratio (CVR) and content validity index (CVI) criteria. Three overlapping items were merged to tighten the scale.
  • Construct validity: EFA confirmed a coherent structure (KMO acceptable; Bartlett's test significant). One main component captured most of the variance.
  • Reliability: Cronbach's alpha and ICC were both 0.95, indicating excellent internal consistency and stability.
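For teams that want to run a similar construct-validity check on their own responses, here is a minimal sketch using scikit-learn's PCA. It only illustrates the variance-explained step; the study's EFA, KMO, and Bartlett statistics came from its own analysis, and the data below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical response matrix: 60 respondents x 12 items, scored 1-5
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(60, 12)).astype(float)

# Standardize the items, then see how much variance the first component captures.
# For reference, the study reports one component explaining 65.31% of variance.
scaled = StandardScaler().fit_transform(responses)
pca = PCA(n_components=3).fit(scaled)
for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"Component {i}: {ratio:.1%} of variance")
```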

What the instrument actually measures

Each domain includes four items that assess practical AI use cases:

  • Clinical decision-making: Examples include diagnostic support, improving test accuracy, identifying disease patterns, and creating personalized care plans and consultations.
  • Organizational decision-making: Examples include demand forecasting, scheduling and resource allocation, financial risk management, and data-driven managerial insights.
  • Shared decision-making: Examples include patient access to information, tailored education, behavior analysis with relevant recommendations, and tools that support patient-provider decisions.

The headline finding: utilization is low

When applied across healthcare organizations in Iran, AI utilization scored low on most items. That aligns with known barriers: limited information systems, financing constraints, gaps in executive infrastructure, governance and policy challenges, public skepticism, and restricted access to equipment.

For comparison, other contexts with national programs, funding, and training pipelines report higher uptake among caregivers. The difference appears tied to policy priority, infrastructure, workforce training, and sustained investment rather than to the technology alone.

How to use this tool in your hospital or health system

  • Form a small working group: Include clinical leadership, nursing, operations, IT/IS, quality/safety, and data governance.
  • Run the 12-item checklist across departments that make frequent, high-impact decisions (ED, ICU, oncology, perioperative, scheduling, finance).
  • Score by domain: Calculate the average for clinical, organizational, and shared decision-making, then compare units and service lines (see the scoring sketch after this list).
  • Do a gap review: For low-scoring items, map barriers: data availability, workflow fit, integration, policy, training, budget.
  • Prioritize 2-3 use cases: Pick those with clear ROI and strong clinical sponsorship (e.g., imaging triage, sepsis alerts, no-show prediction, OR block utilization).
  • Build enablers: Data integration, privacy and security controls, model monitoring, and a straightforward approval path.
  • Pilot with evaluation: Track safety and performance (calibration, false alerts), operations (time, cost), outcomes, patient experience, and equity.
  • Train and socialize: Short, role-based training for clinicians, managers, and IT. Collect feedback and iterate.
  • Reassess quarterly: Re-run the instrument to see movement by domain and department. Tie results to your AI roadmap and budget cycle.
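To make the "score by domain" and "gap review" steps concrete, here is a minimal pandas sketch. The column names (department, item_1 through item_12) and the item-to-domain mapping are assumptions for illustration; adapt them to however your survey tool exports responses.

```python
import pandas as pd

# Assumed layout: one row per respondent, items scored 1-5,
# items 1-4 = clinical, 5-8 = organizational, 9-12 = shared decision-making.
DOMAINS = {
    "clinical": [f"item_{i}" for i in range(1, 5)],
    "organizational": [f"item_{i}" for i in range(5, 9)],
    "shared": [f"item_{i}" for i in range(9, 13)],
}

def domain_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Average each domain per department for side-by-side comparison."""
    by_dept = df.groupby("department")
    return pd.DataFrame(
        {name: by_dept[items].mean().mean(axis=1) for name, items in DOMAINS.items()}
    )

def low_scoring_items(df: pd.DataFrame, cutoff: float = 2.5) -> pd.Series:
    """Flag items whose organization-wide mean falls below the cutoff for gap review."""
    all_items = [item for items in DOMAINS.values() for item in items]
    means = df[all_items].mean()
    return means[means < cutoff].sort_values()
```

Re-running the same calculation each quarter (the reassessment step above) gives a longitudinal view by domain and department that can feed directly into roadmap and budget reviews.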

A simple scoring approach

Average the 12 items (five-point scale). Then average within each domain. Use these baselines to compare units, track progress over time, and guide investments. Avoid hard thresholds at first; focus on relative improvement and consistency.
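As a worked example of that arithmetic, here is one respondent's hypothetical answers in domain order (clinical, organizational, shared), four items per domain:

```python
# One respondent's hypothetical answers on the 1-5 scale, four items per domain
answers = [4, 3, 2, 3,   2, 2, 1, 2,   3, 4, 3, 3]

overall = round(sum(answers) / len(answers), 2)     # overall average of all 12 items
clinical, organizational, shared = (
    sum(answers[i:i + 4]) / 4 for i in range(0, 12, 4)
)
print(overall, clinical, organizational, shared)    # 2.67, 3.0, 1.75, 3.25
```

In this example the organizational domain stands out as the weakest, which is exactly the kind of signal the gap review is meant to surface.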

Common barriers and straightforward fixes

  • Data quality and access: Establish data dictionaries, improve documentation, and unify identifiers. Start with one source of truth for each use case.
  • Privacy and security: Run a data protection impact assessment and set clear access rules. Limit PHI movement; use audit trails.
  • Governance: Stand up an AI review group with clinical, data, and ethics representation. Define approval, monitoring, and decommission paths.
  • Workflow fit: Integrate into the EHR and existing tools. Surface AI recommendations where decisions happen, not in a separate portal.
  • Workforce readiness: Provide short, practical training and quick-reference guides. Focus on limitations and appropriate use.
  • Procurement and contracts: Require transparency on model updates, performance, bias checks, and support SLAs.
  • Financing: Start with focused pilots that have measurable upside. Reinvest savings into the next wave of use cases.


Upskilling your teams

If you need role-specific training for clinicians, analysts, or managers to accelerate safe AI adoption, explore curated programs by job role:

AI courses by job role - Complete AI Training

Limitations and what's next

The instrument was validated in one country and did not measure downstream impact on outcomes. Future work should test it across regions, run confirmatory factor analysis, and build specialized instruments per decision domain for more granular insights.

The immediate opportunity: use this validated tool to set a baseline, focus your roadmap, and show measurable progress in how AI supports decisions, not just where it exists.

Bottom line

A short, validated checklist now exists to quantify AI use in clinical, organizational, and shared decision-making. Put it to work, aim for incremental gains, and let the data guide where AI actually helps your teams make better decisions.

