Joint Commission and CHAI Set the Bar for Responsible AI in Health Care
The Joint Commission (TJC) and the Coalition for Health AI (CHAI) released guidance to help health care organizations safely integrate AI across clinical, operational, and administrative workflows. The guidance addresses risks such as errors, opacity, privacy and security exposure, and overreliance on automated outputs.
It is not binding, but TJC indicates a voluntary "Responsible Use of AI" certification program is coming. For operations leaders, this is a clear signal: put structure around AI now or risk safety, compliance, and trust later.
The Seven Elements, Turned Into Operations
1) AI Policies and Governance Structures
- Publish an enterprise AI policy that sets approval, usage, and accountability rules for all AI tools (clinical, operational, and administrative).
- Stand up an AI governance committee with representation from compliance, privacy, security, IT, clinical leadership, operations, and quality/safety.
- Define decision rights and a clear RACI for selection, validation, monitoring, incident response, and offboarding.
- Report AI inventory, risk posture, incidents, and outcomes to the board or governing body on a set cadence.
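A minimal sketch of what an AI inventory entry and a board-cadence report might look like in code, assuming a simple in-house registry. All field names, risk tiers, and example tools here are illustrative, not prescribed by TJC or CHAI:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in an enterprise AI inventory (illustrative schema)."""
    name: str
    category: str          # "clinical", "operational", or "administrative"
    risk_tier: str         # e.g. "high" if the tool informs clinical decisions
    owner: str             # accountable role from the RACI
    approved: bool = False
    incidents: list = field(default_factory=list)

def board_report(inventory):
    """Summarize the inventory and risk posture for a set-cadence report."""
    by_tier = {}
    for tool in inventory:
        by_tier.setdefault(tool.risk_tier, []).append(tool.name)
    open_incidents = sum(len(t.incidents) for t in inventory)
    return {"tools_by_tier": by_tier, "open_incidents": open_incidents}

registry = [
    AIToolRecord("sepsis-alert", "clinical", "high", "CMIO", approved=True),
    AIToolRecord("claims-coder", "administrative", "medium", "Revenue Cycle Lead"),
]
print(board_report(registry))
```

Even a flat registry like this makes the governance questions concrete: every tool has an accountable owner, a risk tier, and an incident trail that rolls up to the board.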
2) Patient Privacy and Transparency
- Set explicit rules for data access, use, retention, and disclosures that match applicable laws and organizational policy.
- Tell patients when and how AI supports their care, how their data may be used, and any material limitations.
- Secure informed consent where required by law, regulation, or organizational standard of care.
- Inform staff where AI is in use, what it does, and what it does not do.
3) Data Security and Data Use Protections
- Enforce HIPAA-compliant controls: encryption in transit/at rest, least-privilege access, audit logging, and periodic risk assessments.
- Maintain an incident response plan that covers AI-related data exposure, model misuse, and third-party breaches.
- Use data use agreements that limit purpose, minimize exports, prohibit re-identification, bind vendors to your privacy/security policies, and grant audit rights.
4) Ongoing Quality Monitoring
- Apply risk-based monitoring: prioritize tools that inform or drive clinical decisions.
- Baseline outcomes before go-live and track for drift, performance decay, and unintended effects.
- Test against known standards and validate updates before deployment.
- Route adverse events to leadership and vendors; maintain a feedback loop and corrective action process.
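The monitoring loop above can be sketched as a baseline-vs-live comparison. This is one way to operationalize a drift check; the two-proportion z-test and the 1.96 threshold are illustrative statistical choices, not part of the guidance:

```python
import math

def drift_alert(baseline_hits, baseline_n, live_hits, live_n, z_crit=1.96):
    """Flag drift when the live success rate differs from the pre-go-live
    baseline by more than the chosen threshold (two-proportion z-test)."""
    p_base = baseline_hits / baseline_n
    p_live = live_hits / live_n
    pooled = (baseline_hits + live_hits) / (baseline_n + live_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / baseline_n + 1 / live_n))
    z = (p_live - p_base) / se
    return abs(z) > z_crit, z

# Example: accuracy fell from 92% at validation to 85% in production.
alert, z = drift_alert(920, 1000, 850, 1000)
print(alert)  # True: performance decay large enough to route to leadership
```

The same pattern works for any binary outcome tracked against a pre-go-live baseline; an alert would feed the corrective action process rather than trigger automatic rollback.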
5) Voluntary Reporting
- Enable confidential, anonymous reporting of AI safety incidents to an independent organization (e.g., a Patient Safety Organization).
- Protect patient privacy during reporting and share learnings back into governance, risk, and training workflows.
6) Risk and Bias Assessment
- Document intended use, populations served, and clinical or operational context for each tool.
- Verify training and validation data are representative; test performance across demographic groups.
- Track bias metrics, document limitations, and set guardrails for off-label use.
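One way to make the subgroup testing concrete is to compute a performance metric per demographic group and flag outliers. The sketch below uses sensitivity (true positive rate) and a 10-point gap threshold; both the metric and the threshold are illustrative policy choices a governance committee would set:

```python
def subgroup_sensitivity(records):
    """Compute sensitivity (true positive rate) per demographic group.
    `records` is a list of (group, y_true, y_pred) tuples with 0/1 labels."""
    tp, pos = {}, {}
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] = pos.get(group, 0) + 1
            if y_pred == 1:
                tp[group] = tp.get(group, 0) + 1
    return {g: tp.get(g, 0) / n for g, n in pos.items()}

def bias_flags(rates, max_gap=0.10):
    """Flag groups whose sensitivity trails the best-performing group
    by more than `max_gap` (an illustrative tolerance)."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best - r > max_gap]

records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 1, 1),   # group A: 3/4 detected
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 0),   # group B: 1/4 detected
]
rates = subgroup_sensitivity(records)
print(bias_flags(rates))  # ['B']: group B trails and warrants remediation
```

A flagged group becomes a documented limitation with a remediation item, which is also where guardrails for off-label use get recorded.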
7) Education and Training
- Deliver role-based training on capabilities, limits, failure modes, privacy/security obligations, and escalation paths.
- Gate access and privileges to a need-to-use basis; log and review usage.
- Publish a single source of truth for AI policies, approved tools, and support resources.
Implications for Healthcare Operations
Federal law is still incomplete in this area. Use existing frameworks to move now and reduce risk. The NIST AI Risk Management Framework and the upcoming playbooks from TJC/CHAI can anchor policy, procurement, and monitoring. For primary sources, see The Joint Commission and CHAI.
90-Day Action Plan
- Appoint an executive sponsor and form the AI governance committee.
- Inventory all AI and algorithmic tools in use or in the pipeline; classify by risk.
- Publish a system-wide AI policy and a pre-implementation checklist.
- Standardize contract language and data use agreements for AI vendors.
- Stand up monitoring: outcome dashboards, drift checks, and incident reporting.
- Launch a confidential reporting channel and link it to quality and safety operations.
- Roll out role-based training; restrict access until training is completed.
Metrics That Matter
- Tool coverage and adoption vs. policy compliance
- Clinical and operational outcome deltas vs. baseline
- Safety incidents and time-to-detection
- Performance drift rate and model update cycle time
- Bias findings by demographic group and remediation closure
- Consent completion rate and audit exceptions
Procurement and Implementation Checklist
- Intended use, risk tier, and stakeholder sign-off
- Validation evidence, bias testing, and generalizability claims
- Security review, HIPAA mapping, and vendor SOC/ISO evidence
- DUA clauses: purpose limitation, minimization, no re-ID, audit rights
- Monitoring plan, incident pathways, and decommission criteria
If you need structured upskilling for clinical and operations teams adopting AI, explore role-based options at Complete AI Training or see current AI certification paths.