Healthcare Organizations Grapple With AI Governance Gaps
Healthcare leaders face a critical challenge: AI tools are entering clinical and operational workflows faster than organizations can formally evaluate them. At Sheppard Mullin's inaugural Healthy AI Forum on April 16, hospital executives, legal teams, and compliance officers discussed how to build governance structures, manage vendor relationships, and mitigate risk in an environment with minimal federal regulation.
The forum revealed a consistent pattern across healthcare systems. Business teams and clinicians are experimenting with AI solutions, including unapproved tools on personal devices, before legal and compliance teams can assess them. This "shadow AI" problem forces organizations to rely on education and clear usage policies rather than IT controls alone.
Governance Cannot Stay in IT
Effective AI oversight requires collaboration across legal, compliance, clinical, operational, and executive leadership. Siloing AI governance within IT or compliance functions leaves critical gaps.
Physician involvement matters most when AI tools directly affect patient care, clinical decision-making, quality metrics, or medical records. Without clinical input, governance frameworks miss the practical realities of how AI actually gets used at the bedside.
Healthcare organizations are building tiered approval processes that categorize AI tools by risk level. Higher-risk technologies escalate to executive leadership or governing boards for oversight. This approach balances innovation with accountability rather than treating legal review as a barrier to progress.
Patient Trust Hinges on Transparency
Public skepticism about AI, combined with recurring healthcare data breaches, has made patient communication essential. Organizations cannot rely on consent forms alone.
Meaningful patient education requires explaining how AI affects their care: what decisions it informs, what data it uses, and what safeguards protect their information. This goes beyond regulatory compliance to rebuild trust.
Current privacy laws, including HIPAA, were not designed for how AI systems ingest, process, and learn from data. Healthcare organizations operate in legal gray areas. Strong internal governance, ongoing risk assessment, and workforce education become critical substitutes for regulatory clarity.
Vendor Due Diligence Extends Beyond Contracts
Traditional contract review is no longer sufficient. Legal teams now evaluate AI vendors through broader risk assessments conducted alongside business and operational stakeholders.
Key diligence steps include assessing data handling practices, de-identification methods, vendor qualifications, insurance coverage, and financial stability. Organizations must also pressure-test early-stage vendors with limited track records.
Healthcare systems should anticipate longer-term risks: What happens to patient data if a vendor fails? How will data be returned or destroyed? These questions require clarity before signing agreements.
Oversight does not end at contract execution. Organizations should regularly reassess vendor relationships to evaluate evolving risks, scope creep, and compliance with governance policies. Standardized protections, such as AI-specific security addenda, business associate agreements, and clear data provisions, help maintain consistent safeguards across vendors.
Insurance and Regulatory Pressure Building
Insurers have not yet materially changed their AI-related underwriting questions, but that is expected to shift. Carriers increasingly focus on governance maturity, cybersecurity safeguards, documentation practices, and enterprise oversight of AI systems.
Forward-looking healthcare organizations already document and share governance structures with insurers during renewal discussions. Demonstrating governance maturity is becoming a marker of risk readiness.
As states advance AI-related legislation, healthcare organizations have an opportunity to engage with policymakers and help shape emerging regulatory frameworks. Proactive legislative engagement now may influence future rules that reflect operational realities within health systems.
The Road Ahead
Healthcare organizations investing now in strong governance structures, rigorous vendor diligence, cross-functional collaboration, and proactive risk management will be best positioned to adopt AI responsibly while protecting patient trust, safety, and privacy.
Education and transparency matter at every level, from executive leadership and clinicians to patients and operational teams. As AI tools become embedded in clinical workflows, they will influence evolving standards of care. Healthcare systems must establish oversight and validation frameworks to ensure appropriate use.
For professionals managing AI adoption in healthcare, understanding these governance principles is essential.
The next Sheppard Mullin Healthy AI Forum takes place November 12, 2026, in Washington, D.C.