Responsible AI Governance at Scale in Oncology: Lessons, Tools, and a Real-World Framework for Safer Cancer Care

A cancer center developed a Responsible AI governance model to ensure safe, ethical, and effective AI use in oncology. The iLEAP framework guides AI projects from research to clinical deployment.

Published on: Jul 05, 2025

Responsible Artificial Intelligence Governance in Oncology

Artificial Intelligence (AI) is becoming a key player in healthcare, and oncology is seeing increasing AI integration. Although generic AI governance frameworks exist in healthcare, models dedicated to oncology remain scarce. This article presents insights from a Comprehensive Cancer Center's experience establishing a Responsible AI (RAI) governance model to oversee clinical, operational, and research AI programs in oncology.

Introduction

AI is influencing every stage of cancer care, from clinical trial matching and decision support to research applications. The FDA's registry now lists nearly a thousand AI- and machine learning-enabled medical devices, with a growing subset focused on oncology. Institutions are testing various AI types, including generative AI, ambient AI, and traditional machine learning models.

However, oncology presents unique challenges. AI models trained on general populations often require adaptation for oncology-specific tasks such as tumor evolution prediction, genomic decision support, pathology image analysis, and treatment toxicity forecasting. Additionally, existing disparities in cancer care highlight the risk of AI exacerbating inequities if not carefully governed.

Given these complexities, a specialized governance approach is needed to ensure AI is safe, effective, and equitable in oncology. This includes addressing data quality, talent resources, legal considerations, and ethical use.

Designing a Responsible AI Governance Framework

The governance effort began with an AI Task Force (AITF) that mapped out existing AI projects and identified key challenges:

  • Securing high-quality data for AI development
  • Accessing high-performance computing resources
  • Building AI talent capacity
  • Establishing clear policies and procedures

Inventorying ongoing projects revealed 87 active AI initiatives across clinical, research, and operational domains. The task force set strategic priorities and proposed a partnership model for evaluating AI vendors. Increasing AI-enabled data curation was a key recommendation.

Implementing the AI Governance Committee

Following the task force, the AI Governance Committee (AIGC) was established to oversee AI lifecycle management. Its core framework, iLEAP (Legal, Ethics, Adoption, Performance), guides AI projects through defined decision gates based on their development pathway: research, home-grown builds, or acquired models.
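To illustrate how such decision gates might be encoded, the sketch below models a simplified project lifecycle in Python. The gate names, pathway labels, all-gates-must-pass rule, and the example model name are assumptions for illustration; the article describes the gates' purpose, not their internal data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class Pathway(Enum):
    """How a model enters the portfolio (pathway labels taken from the article)."""
    RESEARCH = "research"
    HOME_GROWN = "home-grown build"
    ACQUIRED = "acquired model"


class Gate(Enum):
    """Gates named after the iLEAP letters; the real gate set may differ."""
    LEGAL = "legal"
    ETHICS = "ethics"
    ADOPTION = "adoption"
    PERFORMANCE = "performance"


@dataclass
class AIProject:
    name: str
    pathway: Pathway
    passed_gates: set[Gate] = field(default_factory=set)

    def record_gate(self, gate: Gate, approved: bool) -> None:
        """Record a gate decision; a failed gate keeps the project out of production."""
        if approved:
            self.passed_gates.add(gate)

    @property
    def cleared_for_deployment(self) -> bool:
        """Assume a project may deploy only after every gate has signed off."""
        return self.passed_gates == set(Gate)


# Example: a hypothetical home-grown model clears all four gates.
project = AIProject("toxicity-forecaster", Pathway.HOME_GROWN)
for gate in Gate:
    project.record_gate(gate, approved=True)
print(project.cleared_for_deployment)  # True
```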

Key tools developed include:

  • Model Information Sheet (MIS): A comprehensive “nutrition card” that registers AI models and captures anticipated adverse events.
  • Model Registry: Tracks AI models throughout their lifecycle stages.
  • Risk Assessment Tool: Balances risk factors against mitigation strategies to evaluate models.
  • Clinician Trust Evaluation: Measures healthcare providers’ confidence in AI through a validated tool.

These tools formalize AI oversight while preserving scientific freedom during the research phase; governance intensifies as models move toward clinical and operational deployment. Two of the tools are sketched in code below.
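To make the Model Information Sheet and Risk Assessment Tool concrete, here is a minimal sketch of what a registration record and a risk-versus-mitigation calculation might look like in Python. Every field name, the scoring rule, the threshold-free subtraction, and the example model are hypothetical; the article describes the tools' purpose, not their schemas.

```python
from dataclasses import dataclass, field


@dataclass
class ModelInformationSheet:
    """A 'nutrition card' registering a model; fields are illustrative, not the real schema."""
    model_name: str
    intended_use: str
    development_pathway: str          # research, home-grown, or acquired
    training_data_summary: str
    anticipated_adverse_events: list[str] = field(default_factory=list)


def residual_risk(risk_scores: dict[str, int], mitigations: dict[str, int]) -> int:
    """Toy calculation: each mitigation offsets its matching risk factor.

    The real Risk Assessment Tool 'balances risk factors against mitigation
    strategies'; this subtraction is only one plausible reading of that idea.
    """
    return sum(
        max(score - mitigations.get(factor, 0), 0)
        for factor, score in risk_scores.items()
    )


# Example: a hypothetical pathology model with two identified risk factors.
mis = ModelInformationSheet(
    model_name="path-image-classifier",
    intended_use="pathology image triage",
    development_pathway="home-grown",
    training_data_summary="institutional slide archive",
    anticipated_adverse_events=["missed malignancy flag"],
)
score = residual_risk(
    risk_scores={"bias": 3, "automation_over_reliance": 2},
    mitigations={"bias": 2},
)
print(mis.model_name, "residual risk:", score)  # -> path-image-classifier residual risk: 3
```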

Managing an AI Model Portfolio

The committee treats AI initiatives as an active portfolio, with models advancing through iLEAP gates. Entry and exit criteria have been refined to ensure that models meet registration, risk assessment, and review requirements before production deployment. An "Express Pass" option expedites review for models that adhere to best practices and deploy on approved platforms, as the routing sketch below illustrates.
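A plausible way to encode that routing is a small triage function like the one below. The criteria mirror the article's description (registration, risk assessment, best practices, approved platform), but the function itself and its return labels are assumptions, not the committee's actual process.

```python
def review_track(registered: bool, risk_reviewed: bool,
                 follows_best_practices: bool, approved_platform: bool) -> str:
    """Route a model to a review track based on the article's stated criteria.

    Models must be registered and risk-assessed before production; those that
    also follow best practices on an approved platform take the Express Pass.
    """
    if not (registered and risk_reviewed):
        return "blocked: complete registration and risk assessment first"
    if follows_best_practices and approved_platform:
        return "express pass: expedited review"
    return "standard review"


print(review_track(True, True, True, True))    # express pass: expedited review
print(review_track(True, True, False, True))   # standard review
print(review_track(False, True, True, True))   # blocked: complete registration...
```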

Over the past year, 26 AI models and 2 ambient AI pilots were registered and monitored. Retrospective reviews of 33 clinical nomograms led to the retirement of outdated models. Demand for AI projects is increasing, with a 63% rise in project intake in 2024 compared to the previous year.

Case Studies in Governance

Two recent case studies illustrate the governance framework in action:

  • Build vs. Buy: One internally developed AI model and one acquired ambient AI pilot were reviewed, with the latter receiving approval for expanded deployment after ethical consent processes were confirmed.
  • Express Pass Usage: Models meeting the Express Pass criteria were fast-tracked, demonstrating the framework’s ability to balance responsible oversight with efficient innovation.

Discussion and Future Directions

This governance model shows that responsible AI oversight in oncology is achievable without stifling innovation. It extends beyond existing frameworks by tailoring processes and tools to oncology-specific needs. Key elements include multidisciplinary governance, a clear intake process, risk assessment, model registries, vendor data access, integration with safety reporting, and leadership support.

Ongoing challenges include refining decision rights, managing AI-related adverse events, addressing talent shortages, fostering clinician trust, and determining which models require governance review. Future research will focus on the clinical and operational impact of AI models and governance efficacy.

While this study reflects one cancer center’s approach, it offers practical components that other institutions can adapt to establish or enhance their own RAI governance programs.

Methods Overview

The development and implementation of this governance framework followed a two-phase approach:

  • Phase 1: Design and development involved assessing AI use, setting strategic priorities, and establishing governance structures. The AI Task Force and Governance Committee were formed with multidisciplinary representation.
  • Phase 2: Implementation included creating technical support teams and refining governance tools through iterative feedback. This phase emphasized balancing the promotion of AI with its responsible use, and it formalized model intake, risk assessment, and lifecycle management.

Conclusion

Responsible AI governance in oncology requires dedicated structures, tools, and processes that address the unique needs of cancer care. The iLEAP framework, risk assessments, and model registries provide a practical foundation for managing AI models from research to clinical deployment. Institutions looking to implement or improve AI governance can benefit from a pragmatic, multidisciplinary approach that balances safety, ethics, and innovation.

For those interested in advancing AI knowledge in healthcare, resources like Complete AI Training offer courses tailored to AI applications and governance.