AI in Healthcare Demands Proof, Transparency, Reliable Data and Clinical Oversight

AI is streamlining pharmacy work, from prior auths to AI scribes, freeing time for patient care. Safe use demands proof, transparency, reliable data and clinical oversight.

Categorized in: AI News, Healthcare
Published on: Sep 20, 2025


AI is gaining ground across health systems, but patient safety and trust set the bar high. Scott V. Anderson, PharmD, director of member relations and liaison for the section of pharmacy informatics and technology at the American Society of Health-System Pharmacists (ASHP), outlines what it takes to deploy AI responsibly: evidence it works, model transparency, reliable data, and strong clinical oversight.

Where AI is working today

Pharmacists see momentum in automating repetitive work that pulls them away from patients. Prior authorization processing and fax intake are being streamlined, freeing up time for clinical care.

On the clinical side, AI scribes and decision support are gaining traction. Summarizing charts, extracting relevant history, and structuring notes help pharmacists spend less time in the EHR and more time with patients.

Next up: using AI with population health data to surface high-need patients who are often missed. The aim is simple: reduce manual overhead so pharmacists can focus on interventions that improve outcomes.

How AI is changing day-to-day pharmacy work

Pharmacists now get more information, faster. Tools that search clinical literature, assist with drug information, and draft communications are speeding up common tasks.

Patients are also using AI to interpret their health information. Pharmacists are stepping in to validate sources, correct misinformation, and teach patients how to judge credibility.

Importantly, pharmacy professionals beyond informatics are joining in. Evaluation, design, implementation, and performance assessment are becoming team sports, improving familiarity and catching issues early.

What counts as proof of effectiveness

Set up an interdisciplinary AI governance committee with clear authority. Establish criteria for when AI is the right tool, define success upfront, and require ongoing evaluation.

Define the clinical end goal first. Tie metrics to that goal, such as throughput, time-to-therapy, adverse drug events (ADEs) prevented, readmissions avoided, cost per intervention, or patient experience, so results are clear to both clinicians and executives.
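Defining success upfront can be as concrete as writing the targets down in machine-checkable form before go-live. A minimal sketch, with all metric names and thresholds purely hypothetical:

```python
# Hypothetical success criteria agreed on before implementation, and a
# check of pilot results against them. Every name and number here is
# illustrative, not a recommended benchmark.
targets = {
    "median_time_to_therapy_hours": ("<=", 4.0),
    "prior_auth_turnaround_hours": ("<=", 24.0),
    "ade_rate_per_1000_orders": ("<=", 1.5),
}

pilot_results = {
    "median_time_to_therapy_hours": 3.2,
    "prior_auth_turnaround_hours": 30.0,
    "ade_rate_per_1000_orders": 1.1,
}

def evaluate(targets, results):
    """Return the list of metrics that missed their predefined target."""
    misses = []
    for metric, (op, threshold) in targets.items():
        value = results[metric]
        ok = value <= threshold if op == "<=" else value >= threshold
        if not ok:
            misses.append(metric)
    return misses

print(evaluate(targets, pilot_results))  # a non-empty list triggers the stop/go review
```

The point of the exercise is less the code than the discipline: thresholds written down before the pilot cannot be quietly renegotiated after the results come in.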

Keep clinicians embedded in development and rollout. Their involvement builds trust internally and supports transparent conversations with patients about how AI is used in their care.

Transparent models and reliable data

Healthcare data comes from many sources: health systems, vendors, payers, and national datasets. Collaboration and shared standards are essential to make that data interoperable and trustworthy.

Clinicians should be able to see data sources, assess completeness, and understand limitations. Model monitoring for bias and drift is non-negotiable, with clear pathways to report issues and adjust or pause models when needed.
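One common way teams operationalize drift monitoring is the Population Stability Index (PSI), which compares the distribution of recent model scores against the distribution seen at validation. A minimal sketch, with illustrative score samples (the 0.2 alarm threshold is a widely used convention, not a clinical standard):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample of model
    scores and a recent sample; PSI > 0.2 is a common drift alarm."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(sample, i):
        # Fraction of the sample in bin i (last bin includes hi);
        # a small floor keeps the log term finite for empty bins.
        count = sum(
            1 for x in sample
            if lo + i * width <= x < lo + (i + 1) * width
            or (i == bins - 1 and x == hi)
        )
        return max(count / len(sample), 1e-6)

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]            # scores at validation
recent_shifted = [0.7, 0.75, 0.8, 0.85, 0.9, 0.9, 0.95, 0.95]  # scores drifting upward

print(psi(baseline, baseline))              # 0.0: identical distributions
print(psi(baseline, recent_shifted) > 0.2)  # True: escalate for review
```

In production the samples would be far larger and the check scheduled, but the escalation path is the same: a breach feeds the reporting pathway described above, where the model can be adjusted or paused.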

Strengthen your foundation with recognized frameworks and standards such as the NIST AI Risk Management Framework and HL7 FHIR for interoperability.
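HL7 FHIR makes interoperability concrete by standardizing resource shapes, so any conformant system can read another's data. A minimal sketch parsing a hypothetical FHIR R4 Patient resource (the JSON below is illustrative, not from a real system):

```python
import json

# A minimal, hypothetical HL7 FHIR R4 Patient resource.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
# Every FHIR resource declares its type, so consumers can dispatch safely.
assert patient["resourceType"] == "Patient"

# name and given are arrays: FHIR allows multiple names per patient.
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(display_name, patient["birthDate"])  # Peter Chalmers 1974-12-25
```

Because the field names and structure are fixed by the standard, the same parsing logic works whether the resource came from an EHR, a payer, or a national dataset, which is exactly the interoperability the frameworks above aim for.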

Clinical oversight: best practices that hold up in the real world

Train every clinician who uses an AI tool. They should know when it helps, when it fails, and how to report issues fast. Maintain a tight feedback loop with informatics teams, especially during pilots and the first weeks after go-live.

Build a downtime and rollback plan so care continues if a tool underperforms or must be pulled. The standard of care cannot depend on a single model.

Protect the clinician-patient relationship. AI should support clinical judgment, not replace it. Ethical use, strong data stewardship, and clear patient communication are the core of trust.

A quick-start checklist for health systems

  • Form an interdisciplinary AI governance committee with defined decision rights.
  • Pick problems with measurable outcomes; define metrics before implementation.
  • Pilot with guardrails, document assumptions, and set stop/go criteria.
  • Require data provenance and interoperability with existing systems.
  • Continuously monitor for bias and drift; publish your monitoring plan.
  • Keep clinicians in the loop for validation and final decisions.
  • Explain AI use to patients in plain language and document consent where applicable.
  • Prepare downtime and rollback procedures; rehearse them.
  • Establish training and competency pathways for all user roles.
  • Audit post-implementation and retire tools that do not meet clinical or safety targets.

Build AI literacy across your teams

If you are formalizing AI training for clinicians, pharmacists, and operational staff, build practical learning paths tailored to each role.

AI can help pharmacy teams work smarter and extend care. With evidence, transparency, reliable data, and firm clinical oversight, it can do so without sacrificing safety or trust.