AI and Healthcare: What Matters Now
Artificial intelligence is threaded through clinical work, operations and supply chains. It's forcing tech and medicine to meet in the middle - with clear demands for transparency, safety and results.
At the Arizona Business and Health Summit 2025, leaders from academia, health systems and industry put the stakes on the table. "AI is challenging healthcare, and healthcare is challenging AI," said Eugene Schneller of ASU. That tension is healthy, and it's pushing the field to get specific about value, oversight and accountability.
From copilots to agents
We're moving from AI as a supportive tool to systems that act more independently - what Schneller called "agentic AI." That raises hard questions: where does AI sit in the division of labor, and how does it change clinician and administrator autonomy?
The answer will vary by workflow. But one constant is clear: human-in-the-loop isn't going away - it's getting more strategic.
Clinical reality: less admin, more care
Primary care is weighed down by referrals, prior authorizations and routine documentation. Priya Radhakrishnan noted that much of this work doesn't require physician judgment and can be automated with guardrails. Freeing up those minutes compounds into more time with patients and fewer after-hours clicks.
Start small: triage inboxes, draft notes, pre-fill forms, pre-check coverage. Keep a review step. Track time saved and reallocate it to direct care.
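To make the review step concrete, here's a minimal sketch in Python, assuming a hypothetical draft_note() model call; the names are illustrative, not any vendor's API, and nothing is sent until a human signs off.

```python
# Minimal human-in-the-loop sketch: the model proposes, a person approves.
# draft_note() is a stand-in for whatever generative call you actually use.
from dataclasses import dataclass

@dataclass
class Message:
    patient_id: str
    text: str

def draft_note(msg: Message) -> str:
    # Placeholder for a generative-AI call that proposes a reply.
    return f"Draft reply for {msg.patient_id}: re '{msg.text}'"

def triage(inbox: list[Message]) -> list[tuple[Message, str]]:
    """Attach a proposed draft to each message; send nothing automatically."""
    return [(msg, draft_note(msg)) for msg in inbox]

if __name__ == "__main__":
    inbox = [Message("pt-001", "Refill request for lisinopril 10mg")]
    for msg, draft in triage(inbox):
        print("REVIEW REQUIRED:", draft)  # the clinician approval step stays in the loop
```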
Workforce shifts: from users to supervisors
Keynote speaker Susan Feng Lu highlighted a broader trend: entry-level postings are dropping in roles most exposed to generative AI. The takeaway for healthcare isn't panic - it's planning. Roles will skew toward oversight, quality assurance and exception handling.
Expect clinicians to supervise algorithms, manage safety and document accountability. That calls for training on bias, prompt design, error patterns and escalation procedures, not just button-clicking.
Implementation takes time - and structure
Rolling AI into live systems isn't a weekend project. Lu emphasized that deployment typically takes 12-24 months because real impact requires changes to governance, incentives and culture. Tools are the easy part. Policy, workflow redesign and measurement are the work.
If you're not resourcing change management, you're not resourcing AI.
Outcomes pressure and responsible use
"Health outcomes in this country are frankly quite poor," said Sherine Gabriel of ASU Health, citing metrics like maternal and premature mortality. That pressure should focus AI efforts on interventions that move the needle, not vanity pilots.
Responsible AI isn't a slogan; it's a checklist: data provenance, bias testing, audit trails, model monitoring and clear patient communication. For framing, see the NIST AI Risk Management Framework. For context on maternal mortality in the U.S., see the CDC's overview.
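As one concrete instance of the monitoring item, here's a minimal sketch of a rolling-accuracy alert, assuming each prediction is eventually logged against its real outcome; the window size and threshold are placeholder values a review board would set, not standards from NIST or anyone else.

```python
# Rolling-accuracy monitor: alert when recent performance dips below a floor.
from collections import deque

WINDOW = 200            # number of recent predictions to evaluate
ALERT_THRESHOLD = 0.85  # minimum acceptable rolling accuracy (illustrative)

recent = deque(maxlen=WINDOW)

def record(prediction: bool, outcome: bool) -> None:
    recent.append(prediction == outcome)
    if len(recent) == WINDOW:
        accuracy = sum(recent) / WINDOW
        if accuracy < ALERT_THRESHOLD:
            # In practice: page the model owner and open an incident.
            print(f"ALERT: rolling accuracy {accuracy:.2f} < {ALERT_THRESHOLD}")

# Demo: a stream where the model is right about two-thirds of the time.
for i in range(WINDOW):
    record(prediction=True, outcome=(i % 3 != 0))
```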
Trust is earned, not assumed
Trust in medicine runs in cycles. Maria Manriquez pointed out how fast-changing information during the pandemic frayed public confidence. Rebuilding it means meeting people where they are - including their social feeds - with clear explanations and consistent follow-through.
Think algorithmically about outreach: short, factual updates; plain language; repeat across channels; address common misconceptions; cite sources; show your work.
Operations and supply chain: speed with guardrails
Dan Hopkins highlighted a shift toward "friend-shoring" and strategic trade - and the messy data and fragmented workflows that follow. Today, AI mostly surfaces insights so a human can act faster and smarter. That's the right balance when stakes are high.
Bindiya Vakil showed how AI combined with 3D printing can tighten inventory control and response times. With pre-approved rules, systems can trigger actions while teams sleep, then hand off to humans for exceptions.
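A minimal sketch of that hand-off pattern, with a hypothetical per-order spend cap standing in for whatever rules a team has actually pre-approved:

```python
# Pre-approved automation with a human exception path: small orders go
# through overnight; anything over the cap waits for a person.
MAX_AUTO_SPEND = 5_000  # pre-approved per-order limit in dollars (illustrative)

def handle_shortage(part: str, qty: int, unit_cost: float) -> str:
    total = qty * unit_cost
    if total <= MAX_AUTO_SPEND:
        return f"AUTO-ORDERED {qty}x {part} (${total:,.2f})"
    return f"ESCALATED {qty}x {part} (${total:,.2f}) for human approval"

print(handle_shortage("IV-tubing-3mm", 500, 4.20))     # within the cap: acts alone
print(handle_shortage("ventilator-valve", 40, 310.0))  # over the cap: waits for a human
```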
What to do next: practical steps for healthcare leaders
- Pick use cases with measurable ROI: prior auth prep, referrals, documentation drafting, patient FAQs, no-show risk flags, supply risk alerts.
- Stand up governance: create an AI review board, define approval tiers, require bias tests, set monitoring thresholds and incident playbooks.
- Map the division of labor: document what the model proposes, what the human approves and how accountability is recorded (a record-keeping sketch follows this list).
- Fix data upstream: standardize codes, clean reference tables and enable secure data access. Bad inputs erase gains.
- Train for oversight skills: teach verification habits, prompt patterns, red flags and escalation. Make it part of CME and onboarding.
- Pilot, measure, expand: 60-90 day sprints with clear metrics (time saved, error rate, patient satisfaction). Scale only after a post-mortem.
- Communicate with patients: disclose where AI is used, how it's checked and how to opt out when possible.
- Protect clinicians' time: reinvest time savings into patient care, not more meetings. Make the benefit visible.
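To illustrate the division-of-labor item above, here's a minimal sketch of an accountability record, assuming one entry is persisted per AI-assisted decision; the field names are illustrative and would map to your own audit store and retention policy.

```python
# One audit entry per AI-assisted decision: what was proposed, who decided.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    use_case: str        # e.g. "prior-auth-prep"
    model_version: str   # which model made the proposal
    proposal: str        # what the model suggested
    reviewer: str        # the accountable human
    approved: bool       # what the human decided
    timestamp: str       # when the decision was recorded

record = DecisionRecord(
    use_case="prior-auth-prep",
    model_version="draft-model-v3",
    proposal="Submit prior authorization for cardiology referral",
    reviewer="dr.smith",
    approved=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))  # append to an immutable audit log
```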
Signals to watch
- Agentic systems that can take pre-approved actions safely.
- Regulatory guidance on transparency, documentation and monitoring.
- Labor mix changes toward supervision, QA and exception handling.
- Supply chain pivots that favor quicker local response with digital verification.
Skill up your teams
If your clinicians and managers are moving from users to supervisors, upskilling is non-negotiable. Practical courses on prompt design, oversight and workflow integration can shorten the learning curve.
Bottom line: pair clear guardrails with focused use cases. Keep humans in charge, measure what matters and let the tech earn its place on the team.