AI in healthcare: From innovation to real-world impact
AI is moving from pilot projects to everyday clinical work. In a recent discussion, Wong Tien Yin of Tsinghua University; the editor-in-chief of JAMA Ophthalmology, who holds the James P. Gills Professorship at the Wilmer Eye Institute of Johns Hopkins University; and Iskra Reic of AstraZeneca outlined how research, clinical practice, and industry can work together to deliver better outcomes for patients.
The message was clear: focus on problems that matter, validate at the bedside, and build for scale across systems, not just single sites.
Where AI is working today
- Early disease screening: diabetic retinopathy, glaucoma risk, lung nodules, breast lesions, skin cancer.
- Imaging triage: flagging critical CT findings, prioritizing radiology worklists, reducing turnaround time (see the prioritization sketch after this list).
- Digital pathology: slide QC, mitosis detection, tumor grading support.
- EHR signals: sepsis alerts, deterioration prediction, readmission risk, medication safety checks.
- Oncology: treatment selection using multi-omics, response monitoring from imaging and labs.
- Clinical trials: faster site selection, patient matching, synthetic control arms, safety signal detection.
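To make the triage idea concrete, here is a minimal sketch of how a model's critical-finding score might reorder a radiology worklist. The field names, the 0.85 escalation threshold, and the toy studies are illustrative assumptions, not any vendor's API.

```python
# Minimal worklist-triage sketch (illustrative, not a product API): studies the
# model flags as likely critical jump the queue; the rest keep arrival order.
from typing import List, NamedTuple

class Study(NamedTuple):
    study_id: str
    triage_score: float   # model probability of a critical finding (0-1)
    arrival_order: int    # position in the original worklist

CRITICAL_THRESHOLD = 0.85  # assumed escalation threshold agreed with radiology

def prioritize(worklist: List[Study]) -> List[Study]:
    """Critical-flagged studies first, then everything else in arrival order."""
    return sorted(
        worklist,
        key=lambda s: (s.triage_score < CRITICAL_THRESHOLD, s.arrival_order),
    )

if __name__ == "__main__":
    incoming = [
        Study("CT-1001", 0.12, 1),
        Study("CT-1002", 0.91, 2),   # e.g. suspected intracranial bleed
        Study("CT-1003", 0.40, 3),
    ]
    for s in prioritize(incoming):
        print(s.study_id, s.triage_score)
```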
What changes at the bedside
- Faster, more consistent screening that catches disease earlier and reduces missed findings.
- More time for clinicians to talk with patients as documentation and admin tasks get lighter.
- Clearer treatment paths as models summarize risk and surface evidence at the point of care.
- Better access in community settings through portable imaging and autonomous/assistive tools.
Five rules for safe, effective deployment
- Data quality first: standardize inputs (FHIR, DICOM), remove label noise, and document lineage.
- Validate like a drug: external datasets, prospective studies, calibration checks, and subgroup analysis by age, sex, ethnicity, and comorbidity.
- Human oversight: clear workflows for review, override, and escalation; make the model's role explicit.
- Privacy and security: de-identification, access controls, audit trails; use federated learning where data can't move.
- Lifecycle monitoring: track drift and performance, log adverse events, and update models under change control (a drift-check sketch follows this list).
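To show what lifecycle monitoring can look like in practice, here is a minimal drift-check sketch using the population stability index (PSI) on one model input. The feature (HbA1c), the bin count, and the 0.2 alert threshold are illustrative assumptions; real monitoring would cover every input plus outcome and performance metrics.

```python
import numpy as np

# Minimal drift check: compare one model input's current distribution against
# the training baseline using the population stability index (PSI).

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a current sample."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip so values outside the baseline range fall into the edge bins.
    baseline = np.clip(baseline, edges[0], edges[-1])
    current = np.clip(current, edges[0], edges[-1])
    expected, _ = np.histogram(baseline, bins=edges)
    observed, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # floor proportions to avoid log(0)
    expected = np.maximum(expected / expected.sum(), eps)
    observed = np.maximum(observed / observed.sum(), eps)
    return float(np.sum((observed - expected) * np.log(observed / expected)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_hba1c = rng.normal(7.2, 1.1, 5_000)   # baseline from development
    this_month = rng.normal(7.8, 1.3, 800)         # shifted post-deployment sample
    score = psi(training_hba1c, this_month)
    print(f"PSI = {score:.3f}")
    if score > 0.2:  # common rule-of-thumb alert level, not a regulatory standard
        print("Significant drift: review under change control.")
```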
Metrics that matter
- Clinical: diagnostic accuracy, time to diagnosis, complication rates, readmissions, length of stay, mortality where relevant.
- Model: sensitivity, specificity, PPV/NPV, AUC, calibration (Brier score), and performance by subgroup (a worked sketch follows this list).
- Operational: turnaround time, clinician minutes saved per case, queue length, no-show reduction, throughput.
- Financial: total cost of care, cost per diagnosis, cost avoidance from fewer adverse events.
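As a concrete reference for the model metrics above, here is a minimal sketch that computes sensitivity, specificity, PPV/NPV, AUC, and the Brier score overall and by subgroup with scikit-learn. The column names, the 0.5 decision threshold, and the toy data are assumptions; in practice the inputs come from an external or prospective validation set.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import brier_score_loss, confusion_matrix, roc_auc_score

# Minimal metrics sketch: report discrimination and calibration overall and by
# subgroup. Column names and toy data are illustrative assumptions.

def summarize(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "auc": roc_auc_score(y_true, y_prob),
        "brier": brier_score_loss(y_true, y_prob),  # calibration: lower is better
        "n": int(len(y_true)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "y_true": rng.integers(0, 2, 600),
        "y_prob": rng.uniform(0, 1, 600),
        "sex": rng.choice(["F", "M"], 600),
    })
    print("overall:", summarize(df.y_true.values, df.y_prob.values))
    for group, sub in df.groupby("sex"):  # subgroup performance check
        print(group, summarize(sub.y_true.values, sub.y_prob.values))
```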
Ophthalmology as a blueprint
Eye care shows what good looks like: standardized image capture, well-defined disease labels, and high-throughput screening programs. Autonomous and assistive diabetic retinopathy (DR) screening has opened access in primary care and retail clinics, while keeping ophthalmologists focused on treatment.
The same playbook applies elsewhere: control acquisition quality, define clear thresholds for action, and plug insights into existing referral and follow-up pathways.
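As a small illustration of "clear thresholds for action," here is a sketch that maps a screening model's output and an image-quality check to a defined next step. The probability cut-off, labels, and pathway names are illustrative assumptions, not a clinical protocol.

```python
# Minimal sketch of threshold-to-action mapping in a DR screening workflow.
# The 0.50 cut-off and the pathway names are illustrative assumptions.

def screening_decision(referable_dr_prob: float, gradable: bool) -> str:
    if not gradable:
        return "RECAPTURE_OR_MANUAL_GRADE"   # acquisition quality gate first
    if referable_dr_prob >= 0.50:            # assumed referable-DR threshold
        return "REFER_TO_OPHTHALMOLOGY"
    return "ROUTINE_RESCREEN_12_MONTHS"

if __name__ == "__main__":
    for prob, gradable in [(0.72, True), (0.10, True), (0.30, False)]:
        print(prob, gradable, "->", screening_decision(prob, gradable))
```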
From research to scale
- Integrate with clinical systems: EHR orders/results, PACS/VNA, and single sign-on. If it's not in the workflow, it won't be used (a minimal integration sketch follows this list).
- Use standard contracts: performance guarantees, real-world monitoring plans, data use terms, and exit/portability clauses.
- Co-design with clinicians: map the current workflow, pick insertion points, and pressure-test alerts to avoid fatigue.
- Plan change management: short training, quick reference guides, champions per site, and feedback loops.
- Regulatory alignment: match intended use to labeling, and maintain documentation for audits and post-market updates.
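To illustrate the integration point, here is a minimal sketch that writes a model output back to the EHR as a FHIR R4 Observation over the standard REST API. The endpoint, token, patient id, and coding are illustrative assumptions; a real integration would follow the site's FHIR implementation guide and security requirements.

```python
import requests

# Minimal sketch: post a model result to the EHR as a FHIR R4 Observation.
# Base URL, token, patient id, and coding below are illustrative assumptions.

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR endpoint
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"           # e.g. obtained via SMART on FHIR

def post_risk_score(patient_id: str, score: float) -> str:
    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Deterioration risk score (example model output)"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": round(score, 3), "unit": "score"},
    }
    resp = requests.post(
        f"{FHIR_BASE}/Observation",
        json=observation,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/fhir+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Per FHIR create semantics, the Location header points at the new resource.
    return resp.headers.get("Location", "")

# Usage (illustrative): post_risk_score("12345", 0.82)
```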
Pharma and precision care
Industry is using AI to accelerate discovery, select targets, and stratify patients. In trials, better matching and digital biomarkers can cut timelines and reduce screen failures.
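As one concrete example of trial matching, here is a minimal sketch of automated pre-screening against structured inclusion/exclusion criteria. The criteria, field names, and patients are illustrative assumptions, and the output is a list for coordinator review, not an enrollment decision.

```python
# Minimal trial pre-screening sketch: check structured EHR fields against a
# trial's inclusion/exclusion criteria and flag candidates for human review.
from typing import Dict, List

def eligible(patient: Dict, criteria: Dict) -> bool:
    age_ok = criteria["min_age"] <= patient["age"] <= criteria["max_age"]
    dx_ok = criteria["required_dx"] in patient["diagnoses"]
    excl_ok = not any(dx in patient["diagnoses"] for dx in criteria["excluded_dx"])
    return age_ok and dx_ok and excl_ok

def prescreen(patients: List[Dict], criteria: Dict) -> List[str]:
    """Return patient ids worth a coordinator's review."""
    return [p["id"] for p in patients if eligible(p, criteria)]

if __name__ == "__main__":
    trial = {"min_age": 40, "max_age": 75,
             "required_dx": "type 2 diabetes",
             "excluded_dx": ["end-stage renal disease"]}
    cohort = [
        {"id": "P-01", "age": 58, "diagnoses": ["type 2 diabetes"]},
        {"id": "P-02", "age": 81, "diagnoses": ["type 2 diabetes"]},
        {"id": "P-03", "age": 63, "diagnoses": ["type 2 diabetes",
                                                "end-stage renal disease"]},
    ]
    print(prescreen(cohort, trial))   # -> ['P-01']
```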
Post-approval, AI helps identify responders, predict adverse events, and support value-based agreements with real-world evidence. The common thread: rigorous data pipelines and transparent methods that clinicians trust.
What to do next
- Stand up an AI governance group with clinical, data, IT, legal, and safety leads.
- Inventory data assets, map standards, and fix quality gaps before model work begins.
- Select 1-2 high-impact use cases with clear owners and measurable outcomes; run a time-boxed pilot.
- Define success metrics upfront and publish results internally, good or bad.
- Upskill teams on evaluation and oversight; a focused catalog of courses organized by job role can speed this up.
The takeaway for healthcare leaders is simple: start small, measure hard outcomes, and build trust by keeping clinicians in the loop. Do that, and AI becomes another reliable tool in the clinical toolkit: useful, safe, and sustainable.