Third-Party AI Risk in Healthcare: Former CISO Rick Doten's Playbook for Vendor Oversight

AI widens vendor risk in healthcare as models, logs, and agents touch PHI. Ask hard questions, set guardrails in contracts, and plan for vendor failures to keep data use appropriate.

Published on: Dec 25, 2025

Getting a Tighter Grip on Third-Party AI Risk in Healthcare

Third-party risk was already a problem. AI makes it bigger. As vendors plug models and agents into workflows that touch PHI, the attack surface grows and the margin for error shrinks.

Independent consultant Rick Doten, former health plan CISO at Centene, puts it plainly: the job isn't only to protect data; it's to make sure it's used appropriately. That means digging into exactly how vendors use AI, what data they collect, and how automated agents interact with accounts and systems.

Why AI Changes Vendor Risk

Traditional vendor reviews often stop at encryption, access controls, and SOC 2. That's no longer enough. AI introduces model choices, fine-tuning practices, data retention behaviors, and autonomous agents that can over-collect data or act beyond the intended scope of a task.

The risks are subtle. A model might log prompts containing PHI. An agent could use broad service accounts and sweep in data it doesn't need. Even "analytics" use cases can drift into PHI processing if inputs aren't constrained.

Questions You Should Ask Every AI-Enabled Vendor

  • Models and hosting: Which models are you using (by name and version)? Are they public, private, or self-hosted? Where are they hosted and in which regions/tenants?
  • Training and retention: Do you train or fine-tune on our data? Are we fully opted out by default? How long do you retain prompts, outputs, logs, and embeddings?
  • PHI handling: Will your AI collect PHI? How do you prevent PHI from entering training sets, logs, or analytics pipelines?
  • Agents and automation: Do you use agents to call APIs, read mailboxes, or operate in EHR/claims/billing systems? Which accounts and scopes are granted, and why?
  • Access boundaries: How do you enforce least privilege for agents and services (scoped tokens, per-function permissions, time-bound access)? A scoped-token sketch follows this list.
  • Data segregation: How do you guarantee tenant isolation for model inputs, outputs, and vector stores?
  • Safety controls: What do you do to mitigate prompt injection, data leakage, jailbreaks, model spoofing, and retrieval poisoning?
  • Observability: What telemetry do you collect on model calls and agent actions? Can we receive logs in near real time?
  • Change management: How are model upgrades, new prompts, or agent behaviors tested and approved before production use?
  • Subprocessors: Which third parties touch our data or model traffic? How are they vetted and monitored?
  • Compliance and assurance: Do you sign a BAA? What audits, attestations, and AI-specific assessments are available?
  • Incident response: What is your notification window, evidence retention policy, and customer access to forensic data?
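
Answers to the access-boundaries question are easier to judge if you know what good looks like. Below is a minimal sketch, assuming the PyJWT library, of issuing a scoped, short-lived token to a vendor agent; the scope names and key handling are illustrative assumptions, not a prescription.

```python
# Minimal sketch: issue a scoped, short-lived token for a vendor agent.
# Assumes PyJWT (pip install pyjwt); scope names and key handling are illustrative.
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-key-from-your-secrets-manager"  # never hard-code in production

def issue_agent_token(agent_id: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    """Return a JWT limited to specific scopes and a short lifetime."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_id,                # which agent or service account
        "scope": " ".join(scopes),      # e.g. "claims:read" -- never a broad admin scope
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),  # time-bound access
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

# Example: an agent that only needs to read claims data for one task
token = issue_agent_token("vendor-agent-claims-summarizer", ["claims:read"])
```

A vendor whose agents run under per-task tokens like this can answer the question with specifics; one whose agents share a long-lived admin credential cannot.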

Guardrails to Require in Contracts and Architecture

  • Data minimization by design: Collect only what's needed; block PHI in prompts where it serves no purpose.
  • Default "no training" posture: No use of customer data for model training or fine-tuning without explicit, written approval.
  • Prompt and log hygiene: Pseudonymize or tokenize identifiers; redact before storage; set strict log retention (see the redaction sketch after this list).
  • Agent containment: Use per-task service accounts with narrow scopes, just-in-time credentials, and session recording where feasible.
  • Isolation and egress control: Enforce tenant isolation for vector databases and models; restrict outbound calls and tool access.
  • Clear model bill of materials: Document models, versions, plugins, tools, vector stores, and data paths.
  • Right to audit and test: Permit third-party assessments, red-teaming, and periodic control validation.
  • Transparent subprocessor list: Require notification and approval for material changes.
  • Incident terms that matter: 24-72 hour notification, access to logs, collaborative forensics, and defined recovery obligations.
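
For the prompt and log hygiene guardrail, a minimal redaction pass might look like the sketch below. The regex patterns and placeholder tokens are assumptions to illustrate the idea; production systems should rely on a vetted PHI/PII detection service rather than hand-rolled patterns.

```python
# Minimal sketch: redact obvious identifiers from a prompt before it is logged or stored.
# Patterns are illustrative only -- use a vetted PHI/PII detector in production.
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # hypothetical MRN format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before storage."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

# Logs keep the redacted form, never the raw prompt
raw_prompt = "Summarize the visit for John, MRN: 00482913, phone 704-555-0123."
print(redact(raw_prompt))
```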

Handling Disruptive Vendor Incidents

Don't wait for the email that says "we're investigating." Build a playbook that assumes partial vendor failure and data exposure within the same day.

  • Immediate actions: Disable vendor-integrated accounts, rotate credentials, and block suspicious IPs/domains associated with the incident (a skeleton runbook follows this list).
  • Continuity: Switch to manual or read-only workflows for clinical and revenue-critical processes; have prebuilt offline procedures.
  • Forensics and evidence: Require raw logs, model/agent telemetry, and audit trails in your contract; collect your own SIEM and EDR data.
  • Communication: Pre-draft internal and patient-facing notices; clarify OCR reporting triggers with legal and privacy.
  • Recovery and review: Restore with staged access, add new detections, and reassess the vendor's AI controls before full cutover.
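
The "immediate actions" step is worth scripting before you need it so it isn't improvised at 2 a.m. The sketch below is a skeleton only: the idp, secrets, and firewall helpers are hypothetical stand-ins for whichever identity provider, secrets manager, and network controls you actually run.

```python
# Skeleton containment runbook for a disruptive vendor incident.
# idp, secrets, and firewall are hypothetical clients -- wire them to your own
# identity provider, secrets manager, and network tooling.

VENDOR_SERVICE_ACCOUNTS = ["svc-vendor-agent", "svc-vendor-etl"]     # from your inventory
VENDOR_INDICATORS = ["203.0.113.10", "api.badvendorhost.example"]    # from the incident report

def contain_vendor_incident(idp, secrets, firewall, ticket_id: str) -> None:
    # 1. Disable vendor-integrated accounts
    for account in VENDOR_SERVICE_ACCOUNTS:
        idp.disable_account(account, reason=f"vendor incident {ticket_id}")

    # 2. Rotate credentials the vendor's integration could have touched
    for account in VENDOR_SERVICE_ACCOUNTS:
        secrets.rotate(account)

    # 3. Block indicators associated with the incident
    for indicator in VENDOR_INDICATORS:
        firewall.block(indicator, ticket=ticket_id)

    # 4. Leave an audit trail for forensics and OCR reporting decisions
    print(f"Containment actions recorded under ticket {ticket_id}")
```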

Why HIPAA Security Risk Analysis Is Hard

Many covered entities struggle because scope is messy. PHI moves across EHRs, portals, data lakes, shadow SaaS, and now AI pipelines and vector stores. Asset inventories age quickly, and data-flow maps are outdated the moment they're finished.

Another issue: teams try to make it perfect and never finish. The better path is to get to a defensible baseline fast, then iterate.

How to Make Risk Analysis Practical

  • Start with data flows: Identify where PHI is created, stored, transmitted, and processed, including AI logs, prompts, and embeddings (see the inventory sketch after this list).
  • Prioritize high-impact processes: Clinical care, claims, billing, patient access, and any AI-supported workflows.
  • Use a standard: Map controls to a recognizable framework and keep evidence up to date.
  • Bake in vendors: Treat critical vendors and their AI components as extensions of your environment; assess them on the same schedule.
  • Iterate quarterly: Refresh your inventory, test top controls, and review incident lessons learned.
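
One way to keep the data-flow step from going stale is to track flows as structured records and flag anything that hasn't been reviewed on your quarterly cadence. The field names and example flows below are assumptions; adapt them to your own inventory.

```python
# Minimal sketch: PHI data-flow inventory with a staleness check.
# Field names are illustrative; AI flows (prompts, logs, embeddings) are first-class entries.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DataFlow:
    name: str
    phi: bool
    stores: list[str]        # where the data lands (EHR, data lake, vector store, vendor logs)
    last_reviewed: date

flows = [
    DataFlow("EHR -> claims data lake", True, ["claims-lake"], date(2025, 10, 1)),
    DataFlow("Chat prompts -> vendor LLM logs", True, ["vendor-log-store"], date(2025, 3, 15)),
    DataFlow("Clinical notes -> embedding store", True, ["vector-db"], date(2025, 11, 20)),
]

STALE_AFTER = timedelta(days=90)   # matches a quarterly review cadence

for flow in flows:
    if flow.phi and date.today() - flow.last_reviewed > STALE_AFTER:
        print(f"Review overdue: {flow.name} (last reviewed {flow.last_reviewed})")
```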

Help for Smaller Hospitals and Clinics

If you have a small team, don't reinvent the process. Start with a handful of proven resources and tune them to your environment.

  • HHS 405(d) program for healthcare cybersecurity practices and threat-focused guidance.
  • NIST AI Risk Management Framework for model-centric risk handling and governance.
  • Join a sharing community (e.g., Health-ISAC) to get indicators, playbooks, and peer benchmarks.
  • Create a 1-page vendor AI addendum and staple it to your standard security questionnaire.

A Lightweight Vendor AI Addendum You Can Use Now

  • List models, versions, hosting locations, and subprocessor roles.
  • Confirm default "no training on our data," with log retention and deletion timelines.
  • Describe PHI handling, redaction, and data classification in prompts and outputs.
  • Detail agent tools, permissions, and approval workflows.
  • Provide evidence of safety testing for prompt injection, data leakage, and retrieval poisoning.
  • Share model and agent telemetry you can deliver to our SIEM.
  • Set incident notification timelines and access to forensic evidence.
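
If it helps to collect these answers consistently across vendors, the same checklist can be captured as a small structured template. The shape below is one illustrative option with hypothetical example values, not a standard schema.

```python
# One possible machine-readable shape for the vendor AI addendum.
# Keys and example values are illustrative, not a standard schema.
vendor_ai_addendum = {
    "vendor": "ExampleVendor Inc.",
    "models": [{"name": "example-model", "version": "2025-06", "hosting": "vendor cloud, us-east"}],
    "subprocessors": [{"name": "ExampleCloud", "role": "hosting"}],
    "training_on_customer_data": False,   # default "no training" posture
    "log_retention_days": 30,
    "deletion_sla_days": 30,
    "phi_handling": {"phi_in_prompts_allowed": False, "redaction_before_storage": True},
    "agents": [{"tool": "claims-api", "scopes": ["claims:read"], "approval_workflow": "security review"}],
    "safety_testing": ["prompt injection", "data leakage", "retrieval poisoning"],
    "telemetry_to_customer_siem": True,
    "incident_notification_hours": 48,
}
```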

Operational Tips That Pay Off Fast

  • Block PHI in prompts by default; require explicit approval to enable it for limited use cases.
  • Route all model traffic through an LLM gateway with policy enforcement, PII scrubbing, and logging (a minimal policy-check sketch follows this list).
  • Use scoped, expiring credentials for agent actions; never let them run under shared admin accounts.
  • Establish a change control for model updates and agent behavior changes.
  • Track a simple metric: vendors with AI in scope assessed this quarter vs. total AI-capable vendors.
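
To make the gateway idea concrete, here is a minimal policy-check sketch. The detect_phi placeholder, the approved-use-case set, and send_to_model are hypothetical stand-ins for your own PHI detector, governance records, and LLM client; they are not a specific product's API.

```python
# Minimal sketch of a gateway-style policy check in front of model calls.
# detect_phi, APPROVED_PHI_USE_CASES, and send_to_model are hypothetical stand-ins.

APPROVED_PHI_USE_CASES = {"prior-auth-summarization"}   # explicitly approved exceptions

def detect_phi(text: str) -> bool:
    """Placeholder: call your PHI/PII detection service here."""
    return "MRN" in text   # illustrative check only

def gateway_call(use_case: str, prompt: str, send_to_model) -> str:
    # Block PHI in prompts by default; allow only explicitly approved use cases
    if detect_phi(prompt) and use_case not in APPROVED_PHI_USE_CASES:
        raise PermissionError(f"PHI detected; use case '{use_case}' is not approved for PHI")
    response = send_to_model(prompt)
    # Log call metadata (not the raw prompt) for observability
    print(f"model call: use_case={use_case}, prompt_chars={len(prompt)}")
    return response
```

The same choke point also makes the coverage metric easy to compute, since every AI-capable vendor's traffic passes through it.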

About Rick Doten

Rick Doten is an independent consultant and former CISO and vice president of information security at Centene Corp. He has served as a virtual CISO for international firms, sits on the Cloud Security Alliance CXO Trust Advisory Council, and serves on the boards of his local Charlotte ISC2 and CSA chapters. He advises venture and go-to-market firms on security technology and serves on advisory boards for several startups.

Skill Up Your Team

If your compliance, security, or data teams need practical AI skills to evaluate vendors and set guardrails, consider curated training that focuses on safe AI use, prompt safety, and governance. You can scan options by role here: Complete AI Training - courses by job.

Bottom Line

Ask sharper questions, lock down AI-specific behaviors, and plan for vendor failure before it happens. Do that, and third-party AI risk becomes something you can manage with confidence rather than fear.

