Clinician Trust, Data Readiness, and Interoperability: Making AI Stick in Healthcare
AI is moving from pilots to practice in healthcare, but success hinges on trust, clean data, and connected systems. Monitor models, manage change, and link tools to earn adoption.

AI is moving from pilot to practice across care settings, from ambient scribing to clinical decision support. At the MedCity INVEST Digital Health conference in Dallas, a panel moderated by Keith J. Figlioli brought three leaders together to share what actually drives success on the ground: trust, data readiness, and interoperability.
Adoption Starts With Trust
Dr. Steve Miff, president and CEO of Parkland Center for Clinical Innovation, put it plainly: if staff don't trust the tool, they won't use it. That means clear evaluation frameworks, explainable outputs, and real-time oversight.
"These tools cannot be a black box," he said. "Once you open up the gate, you're going to end up with dozens of different AI models … we've been focusing on developing methods to monitor the performance of these models in real time." Continuous monitoring builds confidence because teams know someone is watching the models and will flag drift or failure fast.
Miff also noted a real concern among frontline staff about job loss. Expect pushback. Address it directly: define where AI assists, where humans decide, and how roles improve, not disappear.
Data And Change Management Decide Outcomes
Jess Botros, vice president of IT strategy and operations at Ardent Health, emphasized a simple goal: let clinicians spend more time with patients and equip them with the right tools. That requires tight data discipline and intentional change management.
"You have to have your house in order from a data perspective, from a trust perspective," she said. Communicate why the tool exists, how it helps, and what changes; then support the workflow shifts to make it stick.
Connect What You Have Before You Buy More
Abhinav Shashank, CEO and co-founder of Innovaccer, argued that healthcare's biggest friction (claims, value-based care transitions) comes from broken information flows. The priority: connect current systems instead of adding more disconnected software.
"Great software is going to get built all across the U.S., and what we need to work on is to create a system that connects these things and makes them really work together well," he said. Interoperability isn't optional; it is the path to measurable impact.
Practical Steps You Can Run This Quarter
- Create an AI governance board with clear standards for safety, bias checks, explainability, and escalation paths. Align to frameworks like the NIST AI Risk Management Framework.
- Stand up real-time model monitoring: data drift, performance thresholds, alerting, and rollback plans.
- Publish model "nutrition labels" (purpose, data sources, known limits, who to contact) inside the tools your teams use.
- Instrument workflows: measure time saved, documentation quality, revisit rates, and patient outcomes, not just accuracy.
- Tighten data pipelines: identity matching, data quality rules, and access controls. Require FHIR APIs and open standards; align with ONC FHIR guidance.
- Train for AI literacy by role (clinicians, rev cycle, care management). Clarify decisions AI supports vs. decisions humans own.
- Start small: pick one workflow, one unit, one model. Prove value, then scale.
- Communicate early and often: what changes, why it matters, and how to give feedback inside the workflow.
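The real-time monitoring step above can start with something as simple as a distribution check on model scores. The sketch below is a minimal, illustrative Python example, assuming recent model scores and a baseline sample are available as lists; the Population Stability Index (PSI), ten buckets, and the 0.2 warning threshold are common industry conventions, not the panel's specific method.

```python
# Minimal drift check: compare a recent window of model scores against a
# baseline using the Population Stability Index (PSI). Bucket count and
# threshold are illustrative assumptions.
import math
from typing import List

def psi(baseline: List[float], recent: List[float], buckets: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    lo = min(min(baseline), min(recent))
    hi = max(max(baseline), max(recent))
    width = (hi - lo) / buckets or 1.0  # avoid zero width if all scores equal

    def bucket_fractions(xs: List[float]) -> List[float]:
        counts = [0] * buckets
        for x in xs:
            i = min(int((x - lo) / width), buckets - 1)
            counts[i] += 1
        # Smooth empty buckets so the log term stays finite.
        return [max(c / len(xs), 1e-4) for c in counts]

    b, r = bucket_fractions(baseline), bucket_fractions(recent)
    return sum((rj - bj) * math.log(rj / bj) for bj, rj in zip(b, r))

def drift_alert(baseline: List[float], recent: List[float],
                threshold: float = 0.2) -> bool:
    """Flag drift when PSI crosses a commonly cited 0.2 warning level."""
    return psi(baseline, recent) > threshold

# An unchanged distribution should not alert; a shifted one should.
stable = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]
print(drift_alert(stable, stable))   # False
print(drift_alert(stable, shifted))  # True
```

In practice this check would run on a schedule against production score logs, with the alert wired to the escalation path the governance board defines, and paired with the rollback plan from the same bullet.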
The Bottom Line
Trust earns adoption. Clean data and change management make it usable. Interoperability makes it scalable. Get those three right, and AI becomes a teammate, not another tool clinicians try to work around.