HSCC previews 2026 AI cybersecurity guidance: a field guide for healthcare operations
The Health Sector Coordinating Council (HSCC) has released early previews of its 2026 guidance to manage AI-related cybersecurity risk. The Cybersecurity Working Group (CWG) is rolling out a phased set of resources so healthcare organizations can adopt AI responsibly without exposing patients, data, or operations.
The preview includes one-page summaries from five HSCC workstreams: education and enablement, cyber operations and defense, governance, secure-by-design medical devices, and third-party AI risk and supply chain transparency. An initial foundation, "AI in Healthcare: 10 Terms You Need to Know," sets a shared language for teams to work from.
The AI Cybersecurity Task Group behind this effort includes 115 organizations across clinical, administrative, and financial functions. The workstreams split complex issues into focused areas while staying aligned across interdependencies.
What's coming: the five workstreams
1) Education and Enablement
- Focus: Common terminology, practical training, and clear guidance for safe AI use in live environments.
- Key areas: Definitions, fundamentals of AI/ML, risk awareness, and appropriate control measures.
- Deliverables: Top 10 AI definitions, AI-assisted learning materials (videos, infographics), and recommended training paths.
- Outcome for ops: Shared vocabulary across clinical, IT, and vendor teams, and better judgment on where AI helps and where it adds risk.
2) Cyber Operations and Defense
- Focus: Playbooks to prepare for, detect, respond to, and recover from AI-related incidents across healthcare environments.
- Objectives: Incident response and recovery patterns; AI-driven threat intelligence that fits clinical workflows; guardrails for LLMs, predictive ML, and embedded AI.
- Security priorities: AI-specific risk assessments, tailored procedures for model poisoning, data corruption, and adversarial attacks; continuous monitoring and verifiable backups.
- Deliverables: AI Cyber Resilience and Incident Recovery Playbook; AI-Driven Clinical Workflow Threat Intelligence Playbook; Cybersecurity Operations for AI Systems Playbook.
3) Governance
- Focus: End-to-end governance that integrates clinical oversight, security, and regulatory alignment.
- Controls: Map to HIPAA, FDA expectations, and the NIST AI Risk Management Framework.
- Practices: Maintain an inventory of AI systems (purpose, data dependencies, risk), apply an AI autonomy scale to set the right level of human oversight.
- Deliverable: A comprehensive guide with an AI Governance Maturity Model to assess capabilities and prioritize improvements.
4) Secure by Design (Medical Devices)
- Focus: Build security into AI-enabled medical devices from concept to end-of-life.
- Threats addressed: Data poisoning, model manipulation, drift exploitation, and supply chain issues.
- Alignment: U.S. FDA guidance, NIST AI RMF, and CISA Secure by Design principles; transparency via AIBOM/TAIBOM.
- Deliverables: AI Secure by Design guidance, AI Security Risk Taxonomy, role-based implementation briefs, and education materials.
5) Third-Party AI Risk and Supply Chain Transparency
- Focus: Visibility, governance, and lifecycle control for external AI tools and vendors.
- Activities: Identify and track third-party AI, standardize procurement and vetting, define approval pathways, and monitor bias, privacy, and security risks.
- Contracts: Model clauses for data use, PHI handling, breach reporting; clear roles across covered entities and vendors.
- Standards: Align to NIST AI RMF, HICP, HIPAA, and global expectations (FDA, ISO, IMDRF).
What healthcare operations can do now
- Stand up an AI register: List every AI system in use or planned. Capture purpose, data sources, model type, autonomy level, PHI exposure, and owner.
- Define clinical oversight: For each AI tool, set human-in-the-loop requirements, escalation paths, and fail-safes for downtime or drift.
- Run a security baseline: Add AI risks to current assessments. Include model poisoning, data integrity, adversarial prompts, model theft, and third-party dependencies.
- Prep for incidents: Extend your IR runbooks with AI-specific detection and containment steps. Test model rollback and backup validation.
- Tighten procurement: Require AIBOM/TAIBOM, bias testing evidence, monitoring plans, and clear data-use restrictions in contracts.
- Train your teams: Give clinical, operations, and IT leads a shared glossary and scenario-based training. Start with core terms and risk patterns.
- Measure maturity: Use a simple scorecard across governance, operations, data management, and vendor risk. Reassess quarterly.
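The register and scorecard steps above can be sketched as simple structures. This is a minimal illustration: the field names, autonomy tiers, and 1-5 domain scoring below are assumptions for the sketch, not values defined by HSCC, whose published autonomy scale and maturity model should replace them once available.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative autonomy tiers; swap in the HSCC-published scale when released.
AUTONOMY_LEVELS = ("informational", "advisory", "supervised", "autonomous")


@dataclass
class AIRegisterEntry:
    """One row of the AI register: purpose, data, autonomy, PHI, owner."""
    name: str
    purpose: str
    model_type: str          # e.g. "LLM", "predictive ML", "embedded"
    data_sources: list[str]
    autonomy: str            # one of AUTONOMY_LEVELS
    phi_exposure: bool
    owner: str

    def __post_init__(self) -> None:
        # Reject entries that don't use a recognized autonomy tier.
        if self.autonomy not in AUTONOMY_LEVELS:
            raise ValueError(f"unknown autonomy level: {self.autonomy}")


def maturity_score(domain_scores: dict[str, int]) -> float:
    """Average a 1-5 self-assessment across the four scorecard domains
    (governance, operations, data management, vendor risk)."""
    return round(mean(domain_scores.values()), 2)


# Example: register one hypothetical tool and score the program for a quarter.
triage_bot = AIRegisterEntry(
    name="ED triage assistant",
    purpose="suggest acuity level at intake",
    model_type="LLM",
    data_sources=["EHR intake notes"],
    autonomy="advisory",
    phi_exposure=True,
    owner="Clinical Informatics",
)
q1 = maturity_score(
    {"governance": 2, "operations": 3, "data_management": 2, "vendor_risk": 1}
)
```

Keeping the register as structured data rather than a spreadsheet of free text makes the quarterly reassessment trivial to automate and diff.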
Questions to ask vendors before approval
- What data do you collect, where is it stored, and how is PHI protected?
- How do you detect and recover from model poisoning or drift?
- Do you provide an AIBOM/TAIBOM and document third-party components?
- What bias testing and human oversight controls are in place for clinical use?
- What are your incident reporting timelines and evidence requirements?
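One lightweight way to operationalize these questions is a gating checklist in your procurement workflow: no approval until every question has a documented, satisfactory answer. The question keys and all-or-nothing pass rule below are an illustrative sketch, not an HSCC standard.

```python
# Hypothetical vetting gate built from the vendor questions above.
VENDOR_QUESTIONS = (
    "data_collection_and_phi_protection",
    "poisoning_and_drift_recovery",
    "aibom_taibom_and_third_party_components",
    "bias_testing_and_human_oversight",
    "incident_reporting_timeline_and_evidence",
)


def vendor_approved(answers: dict[str, bool]) -> bool:
    """Approve only if every required question was answered satisfactorily.
    Missing answers count as failures, so the default is to block."""
    return all(answers.get(q, False) for q in VENDOR_QUESTIONS)


# Example: one unanswered question is enough to block approval.
complete = {q: True for q in VENDOR_QUESTIONS}
partial = {q: True for q in VENDOR_QUESTIONS[:-1]}
```

Defaulting missing answers to failure keeps the burden of evidence on the vendor rather than the reviewer.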
Timeline
The HSCC workstreams have made strong progress and will begin publishing guidance in succession starting in January, with documents expected through the first quarter. Share the previews across clinical, security, procurement, privacy, and legal teams so you can align fast once each playbook drops.
Bottom line: Treat AI like any high-impact clinical technology. Get the inventory right, wire in governance, extend your incident playbooks, and hold vendors to clear standards. As HSCC publishes each guide, fold it into policy, training, and procurement without delay.