Joint Commission and CHAI Release Seven-Element Guidance for Responsible AI in Health Care

TJC and CHAI issued nonbinding guidance on responsible health care AI, with a voluntary certification program to follow. Legal teams should act now on governance, privacy, security, monitoring, bias, and training.

Categorized in: AI News, Legal
Published on: Oct 14, 2025

Joint Commission Issues Practical Guidance for Responsible AI in Health Care: What Legal Teams Need to Do Now

The Joint Commission (TJC) and the Coalition for Health AI (CHAI) released nonbinding Guidance on the Responsible Use of Artificial Intelligence in Healthcare. It sets expectations for how health care delivery organizations should adopt, oversee, and monitor AI across clinical, administrative, and operational use cases. A voluntary "Responsible Use of AI" certification is planned, which signals where regulators, accreditors, and payors may look next.

For legal teams, this is a blueprint for policy, contracting, privacy/security controls, and ongoing oversight. The scope is broad and the risks are clear: errors, opacity, data misuse, security gaps, and overreliance on AI outputs.

What the Guidance Covers

The Guidance defines "health AI tools" to include any algorithmic solutions touching direct or indirect patient care, support services, or care-relevant operations. It centers on delivery organizations but is useful across the health ecosystem. While nonbinding, it aligns with emerging risk frameworks and will inform TJC's future certification program.

The Seven Elements and Legal Takeaways

  • AI Policies and Governance Structures. Establish a cross-functional governance committee (compliance, privacy, security, IT, clinical, operations) and formal AI policies. Require periodic reporting to the board or governing body. Counsel should define authority, escalation paths, and documentation requirements.
  • Patient Privacy and Transparency. Align AI data access and use with applicable law and internal policy. Provide patient disclosures on AI's role, data use, and benefits; obtain informed consent where required. Update Notices of Privacy Practices and consent templates as needed.
  • Data Security and Data Use Protections. Enforce HIPAA compliance for all AI data flows. Use encryption, least-privilege access, regular risk assessments, and an incident response plan. Contracting should include data use agreements that limit exports, prohibit re-identification, require vendor adherence to your controls, and grant audit rights.
  • Ongoing Quality Monitoring. Monitor AI performance, outcome shifts, and drift; test against known standards. Use a risk-based approach that prioritizes tools informing or driving clinical decisions. Create internal reporting for safety signals and adverse events, and notify leadership and vendors promptly. A minimal drift-check sketch follows this list.
  • Voluntary Reporting. Enable confidential, anonymous reporting of AI safety incidents to an independent organization (e.g., a federally listed Patient Safety Organization). This can improve safety learning while protecting patient privacy and may support privilege where requirements are met.
  • Risk and Bias Assessment. Classify AI risk and document bias assessments. Validate that models are tuned to the populations served and trained on representative data. Require vendors to disclose development data characteristics, known limitations, and known bias mitigations.
  • Education and Training. Train workforce segments on proper use, limits, and risks of each tool; restrict access to need-to-use roles. Maintain a central location for model cards, SOPs, and policies so users can quickly confirm intended use and escalation contacts.
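To make the ongoing-monitoring element concrete, here is a minimal sketch of a periodic drift check that compares a deployed tool's recent positive-prediction rate against the rate observed during pre-deployment validation. The function names, tolerance, and alerting text are illustrative assumptions, not requirements from the Guidance; real programs should use validated statistical tests and governance-approved thresholds.

```python
# Minimal sketch of a periodic drift check for a deployed clinical-support model.
# The baseline rate, tolerance, and escalation message are illustrative assumptions.

from dataclasses import dataclass
from datetime import date


@dataclass
class DriftCheckResult:
    check_date: date
    baseline_rate: float
    observed_rate: float
    drift_detected: bool


def check_prediction_drift(recent_predictions: list[int],
                           baseline_rate: float,
                           tolerance: float = 0.05) -> DriftCheckResult:
    """Flag drift when the recent positive-prediction rate moves more than
    `tolerance` away from the rate observed during pre-deployment validation."""
    observed_rate = sum(recent_predictions) / len(recent_predictions)
    drifted = abs(observed_rate - baseline_rate) > tolerance
    return DriftCheckResult(date.today(), baseline_rate, observed_rate, drifted)


if __name__ == "__main__":
    # Example: validation showed a 12% positive rate; the recent window shows 19%.
    result = check_prediction_drift([1] * 19 + [0] * 81, baseline_rate=0.12)
    if result.drift_detected:
        print(f"Drift detected on {result.check_date}: "
              f"{result.observed_rate:.2%} vs baseline {result.baseline_rate:.2%}; "
              "escalate to the AI governance committee and notify the vendor.")
```

A check like this can run on a schedule, with results logged to the same repository that holds model cards and SOPs so reviewers can trace when a signal was raised and how it was handled.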

Counsel's Action Checklist

  • Adopt an AI governance charter, board reporting cadence, and an AI use policy with role-based responsibilities.
  • Inventory all AI tools; classify by risk; document intended use, data sources, and human oversight requirements (see the sketch after this list).
  • Update HIPAA documentation, BAAs, and data use agreements to cover AI-specific terms: purpose limitation, de-identification, re-identification prohibitions, data residency, model training rights, audit, breach notice, and flow-down obligations.
  • Refresh patient disclosures and consent language where applicable; align with state privacy laws and specialty rules.
  • Implement model monitoring SOPs: pre-deployment validation, drift detection, change control, rollback, and adverse event reporting to leadership and vendors.
  • Stand up a confidential reporting channel and, if using a PSO, ensure contracts and workflows preserve protections.
  • Require vendor transparency: model purpose, limitations, training data summaries, updates, known biases, and performance metrics by subgroup.
  • Integrate AI risks into enterprise security: threat modeling, access controls, third-party risk management, and incident playbooks.
  • Deliver role-specific training and attestations; gate access to higher-risk tools; publish model cards and SOPs in a central repository.
  • Document everything: decisions, testing, incidents, mitigations, and outcomes for accreditation and certification readiness.
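As a concrete starting point for the inventory item above, below is a minimal sketch of an AI tool inventory record with a simple risk tier. The field names and the tiering rule are illustrative assumptions, not mandated by the Guidance; an organization's governance committee should define its own classification criteria.

```python
# Minimal sketch of an AI tool inventory record with an illustrative risk tier.
# Field names, the example tool, and the tiering rule are assumptions for illustration.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # administrative/operational, no patient impact
    MODERATE = "moderate"  # informs clinical or coverage decisions
    HIGH = "high"          # drives clinical decisions or autonomous actions


@dataclass
class AIToolRecord:
    name: str
    vendor: str
    intended_use: str
    data_sources: list[str]
    informs_clinical_decisions: bool
    drives_clinical_decisions: bool
    human_oversight: str        # e.g., "clinician reviews every output"
    monitoring_owner: str

    def risk_tier(self) -> RiskTier:
        # Simple illustrative rule: clinical impact raises the tier.
        if self.drives_clinical_decisions:
            return RiskTier.HIGH
        if self.informs_clinical_decisions:
            return RiskTier.MODERATE
        return RiskTier.LOW


# Hypothetical example entry for the inventory.
tool = AIToolRecord(
    name="SepsisAlert",
    vendor="ExampleVendor, Inc.",
    intended_use="Early warning for sepsis risk in inpatient units",
    data_sources=["EHR vitals", "lab results"],
    informs_clinical_decisions=True,
    drives_clinical_decisions=False,
    human_oversight="Nurse reviews every alert before escalation",
    monitoring_owner="Clinical informatics",
)
print(tool.name, "->", tool.risk_tier().value)
```

Keeping records like this in a central register, alongside model cards and vendor documentation, supports the documentation expectations in the final checklist item.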

Alignment With Existing Frameworks

The Guidance fits with established resources such as the NIST AI Risk Management Framework, which offers a structure for mapping, measuring, and managing AI risk. TJC and CHAI plan to release practical playbooks to operationalize these practices; expect those to influence TJC's voluntary certification criteria.

See: NIST AI Risk Management Framework and Coalition for Health AI.

Looking Ahead

Even without new federal law, the bar for defensible AI use is rising. Organizations that adopt these seven elements now will be better positioned for accreditation reviews, payer audits, and litigation risk.

If your in-house team needs structured upskilling to support governance, vendor diligence, and monitoring, explore relevant certifications at Complete AI Training.

