Keeping AI Human in Healthcare: Cybersecurity, Governance, and Real-World Lessons from Vista Clinical's CIO

AI can speed healthcare work, but people keep it safe and grounded. Ask vendors for proof, lock down data, pilot carefully, and treat AI as a co-pilot, not the final call.

Categorized in: AI News, Healthcare
Published on: Feb 11, 2026

We Get AI for Work: Preparing Real-World Healthcare Environments for an AI-Driven Future

CIO Nicholas DeMeo of Vista Clinical Diagnostics joined podcast co-hosts Eric Felsberg and Joe Lazzarotti to talk about the real work of bringing AI into healthcare. The focus: move fast with automation, but keep governance human. If you run a lab, a clinic, or a health system, this conversation maps to the issues on your desk right now.

Why AI in healthcare has to stay human-centric

AI is built by people and limited by the data it can see. It can look smart and still miss what matters in your environment. Proprietary instruments, middleware, and site-specific workflows won't be obvious to a general model.

That gap creates risk. It also creates an opportunity: pair human judgment and policy with machine speed. Use AI to surface patterns and handle volume, while people make the final calls where safety and context matter.

Vendor selection: questions that cut through the sales pitch

  • Model transparency: What model is used? Built in-house or wrapped around a public API? Prove it.
  • Data handling: Does PHI or internal data touch public models? How is data isolated, encrypted, and deleted?
  • Validation: How was accuracy tested in healthcare settings? What's the observed hallucination rate and how is it controlled?
  • Model stability: Will behavior shift with updates? How are versions governed and revalidated?
  • Security posture: What certifications, pen tests, and audits back up their claims?
  • Contract clarity: Breach notification timelines, BAAs, subprocessor lists, and data residency spelled out.

If a vendor can't answer these cleanly, keep looking. Passing your data through a generic chatbot via a hidden API key is a hard no.
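The checklist above can be turned into a simple gate your review committee fills out before any pilot starts. A minimal sketch, assuming a pass/fail review per question; all field names and the gate logic are illustrative, not terms from the podcast:

```python
# Hypothetical vendor-review gate. Field names and the all-or-nothing
# pass rule are illustrative assumptions, not from the episode.
from dataclasses import dataclass

@dataclass
class VendorReview:
    model_provenance_documented: bool     # in-house vs. public-API wrapper, with proof
    phi_isolated_from_public_models: bool # isolation, encryption, deletion answered
    healthcare_validation_evidence: bool  # accuracy and hallucination testing shown
    version_change_governance: bool       # updates are versioned and revalidated
    security_audits_current: bool         # certifications, pen tests, audits
    contract_terms_explicit: bool         # BAA, breach timelines, subprocessors

def failed_checks(review: VendorReview) -> list[str]:
    """Return the names of any checks the vendor did not pass cleanly."""
    return [name for name, passed in vars(review).items() if not passed]

# A vendor that can't prove model provenance fails the gate outright.
candidate = VendorReview(False, True, True, True, True, True)
print(failed_checks(candidate))  # ['model_provenance_documented']
```

Keeping the gate binary per question mirrors the article's advice: a vague answer counts as a failure, and any failure means keep looking.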

Governance that people actually follow

Start with education. Most pushback fades when teams see the real risks and how policies protect their work and their patients.

Include the people closest to the workflow. If microbiology is affected, someone from that team should help shape the policy. Then formalize roles, permissions, vendor due diligence, and technical controls.

  • Step 1: Educate leaders and frontline teams on AI risks, PHI exposure, and accountability.
  • Step 2: Form a cross-functional committee (clinical, IT, security, compliance, legal, ops).
  • Step 3: Define use cases, red lines, review gates, and exception paths.
  • Step 4: Standardize vendor review, procurement, and monitoring.
  • Step 5: Pilot, measure, and iterate with clear success and safety metrics.

AI note-takers: proceed with care

Transcription and meeting summaries sound helpful. They also introduce recording risk, third-party exposure, and accuracy disputes when the notes are later pulled up as a record of what was said.

If you allow them, limit where they can run, what they can capture, and where outputs can be stored. If you can't control those factors or don't have a clear need, skip them.

Compliance and policy: national baseline vs. state patchwork

Multi-state providers feel the squeeze when one state's rules affect operations elsewhere. Centralized guardrails can speed innovation; state-level nuance can protect local priorities. Both viewpoints have weight.

A practical path is to anchor your program to recognized baselines and layer state-specific nuance on top where needed.

Building the "human-AI nexus" inside your org

Keep humans on judgment, ethics, and exception handling. Let machines take volume, pattern recognition, and repeatable tasks. Train people to think, not just watch tools.

This shift is cultural. Reward teams for raising risks early, documenting decisions, and improving workflows. AI should feel like a co-pilot, not an autopilot.

A practical playbook you can run this quarter

  • Inventory: List every AI-enabled tool, shadow or sanctioned. Map data flows and PHI touchpoints.
  • Access and data controls: Turn off model training on your data where possible. Enforce DLP and logging.
  • Vendor gate: Require model provenance, validation evidence, and breach timelines before pilots.
  • Use-case tiers: Green-light low-risk automations; cordon off clinical decision support for stricter review.
  • Pilot safely: Start in non-production or with synthetic data. Measure accuracy, drift, and human override rates.
  • Close the loop: Create a fast feedback channel from clinicians to IT/security. Adjust policy with evidence.
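The pilot metrics in the playbook (accuracy, drift, and human override rates) can be tracked with very little tooling. A minimal sketch, assuming human reviewers judge each AI output correct or not; the function names and the review-period framing are illustrative:

```python
# Illustrative pilot-safety metrics: accuracy, human override rate,
# and drift measured as the accuracy change between review periods.
# These names and thresholds are assumptions, not a specific tool.

def accuracy(results: list[bool]) -> float:
    """Share of AI outputs a human reviewer judged correct."""
    return sum(results) / len(results)

def override_rate(overrides: int, total: int) -> float:
    """How often a person replaced the AI's output with their own call."""
    return overrides / total

def drift(previous_accuracy: float, current_accuracy: float) -> float:
    """Accuracy change between periods; a sizable drop should trigger revalidation."""
    return current_accuracy - previous_accuracy

week1 = [True] * 46 + [False] * 4   # 46 of 50 outputs judged correct
week2 = [True] * 43 + [False] * 7   # 43 of 50 the following week
print(accuracy(week1))                                  # 0.92
print(round(drift(accuracy(week1), accuracy(week2)), 2))  # -0.06
```

Even a spreadsheet version of this closes the loop the playbook calls for: clinicians flag overrides, IT watches the drift number, and a drop becomes evidence for adjusting policy rather than a hunch.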

What this means for healthcare leaders

  • AI is only as useful as the data and context you give it.
  • Transparency beats hype in vendor selection.
  • Education and inclusion are the shortest path to adoption.
  • Some "assistants" add more risk than value. Choose with intent.
  • Aim for a human-led, machine-accelerated model. Patient safety stays first.

Keep building your team's AI fluency

If you're standing up governance, structured training can speed alignment across roles.

