AI-RAPNO Issues Practical Guidelines for Responsible AI in Pediatric Neuro-Oncology
A subcommittee of the RAPNO consortium focused on artificial intelligence (AI-RAPNO) has published a two-part policy review in The Lancet Oncology with guidance for safely bringing AI into pediatric neuro-oncology. "These recommendations offer a practical roadmap to move from promising research to safe, equitable bedside use," said Anahita Fathi Kazerooni, PhD. "As AI evolves, standardization, validation and transparency will be key to realizing personalized care for children with brain tumors."
Why this matters for leaders
Pediatric CNS tumors are not the same as adult tumors. While AI tools are progressing in adult oncology, they have not translated to pediatrics, where data, protocols, and clinical needs differ.
AI-RANO offers a model for adult neuro-oncology, but pediatrics needs its own playbook. The AI-RAPNO guidance fills that gap by improving consistency across trials, supporting safer care, and setting clear expectations for validation.
What the review covered
The subcommittee searched PubMed, MEDLINE, and Google Scholar for work published from January 2000 to December 2024 on pediatric brain tumors, AI, machine learning, radiomics, and related topics. They reviewed 125 peer-reviewed papers, meta-analyses, and systematic reviews.
Part one maps promising use cases and tools. Part two addresses benefits, challenges, and what it will take to make AI part of standard clinical practice.
Core challenges to plan for
- Inconsistent imaging protocols that drive variability in interpretation.
- Limited annotated pediatric datasets for training reliable models.
- Regulatory and ethical hurdles in clinical integration.
- Need for tight collaboration among clinicians, data teams, and regulators.
Key recommendations you can operationalize
- Standardize imaging across sites. Define tumor subregions consistently and outline (segment) them the same way across readers to reduce interobserver variability.
- Stand up clear, clinical-grade validation frameworks for models before deployment.
- Prepare infrastructure: EHRs, clinical trial systems, and data pipelines should be ready for AI inputs and outputs.
- Use volumetric analysis when assessing irregular tumors; make it optional for others.
- Evaluate response using a full picture: clinical data, lab results, and imaging together.
- Leverage transfer learning and self-supervised learning when pediatric data are scarce; consider synthetic control groups where appropriate.
- Validate pediatric-specific tools across ages, tumor types, locations, and imaging methods. Most adult-focused tools are not suitable for pediatric use.
- Align image intensities across scanners and sites so radiomic features are reproducible (a minimal normalization sketch follows this list).
- Insist on tools that are generalizable, transparent, and equitable, with clear safeguards and guardrails.
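To illustrate the intensity-alignment recommendation above, the sketch below applies z-score normalization within a brain mask before radiomic feature extraction. This is a minimal example under assumptions: the file paths are placeholders, and the use of nibabel and numpy is illustrative rather than anything prescribed by the AI-RAPNO guidance.

```python
# Minimal sketch: z-score intensity normalization of an MRI volume within a brain mask,
# a common first step toward radiomic features that are reproducible across scanners.
# Assumptions: nibabel and numpy are installed; file paths below are placeholders.
import nibabel as nib
import numpy as np

def zscore_normalize(image_path: str, mask_path: str, out_path: str) -> None:
    img = nib.load(image_path)
    data = img.get_fdata()
    brain = nib.load(mask_path).get_fdata() > 0  # boolean brain mask

    mu = data[brain].mean()
    sigma = data[brain].std()

    normalized = np.zeros_like(data)
    normalized[brain] = (data[brain] - mu) / (sigma + 1e-8)  # guard against divide-by-zero

    nib.save(nib.Nifti1Image(normalized, img.affine), out_path)

# Example usage (placeholder file names):
# zscore_normalize("t1_post.nii.gz", "brain_mask.nii.gz", "t1_post_zscore.nii.gz")
```

In practice, more specialized harmonization methods (for example histogram matching or feature-level harmonization) may be preferred; whichever approach is chosen should be validated across sites per the recommendations above.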
What this means for operations and strategy
- Governance: Define evidence thresholds for adoption, reporting standards, and model change control.
- Procurement: Require documentation on training data, pediatric validation, bias testing, and performance across scanner types and sites.
- Data strategy: Fund data annotation and multi-site data harmonization to cut variability.
- Clinical integration: Start with a narrow use case (e.g., treatment response for one tumor type) and expand once validated.
- Monitoring: Track accuracy, time-to-decision, interobserver variability, adverse event detection, and equity metrics by subgroup (see the sketch after this list).
- Compliance and ethics: Engage IRB and legal early; document consent, privacy protections, and ongoing safety review.
- Change management: Train radiology, oncology, and trial teams on when to trust outputs and how to escalate uncertainties.
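To make the monitoring item concrete, a minimal sketch of equity-aware performance tracking might compute accuracy by patient subgroup from a prediction log. The column names, thresholds, and pandas usage below are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: compute model accuracy by patient subgroup from a prediction log.
# Assumptions: a pandas DataFrame with columns "subgroup", "label", "prediction";
# the column names and flagging thresholds are illustrative only.
import pandas as pd

def accuracy_by_subgroup(log: pd.DataFrame, min_n: int = 20) -> pd.DataFrame:
    log = log.assign(correct=(log["label"] == log["prediction"]).astype(int))
    summary = (
        log.groupby("subgroup")["correct"]
        .agg(n="count", accuracy="mean")
        .reset_index()
    )
    # Flag subgroups too small to interpret reliably or lagging overall performance.
    overall = log["correct"].mean()
    summary["flag"] = (summary["n"] < min_n) | (summary["accuracy"] < overall - 0.05)
    return summary

# Example usage with a toy log:
# df = pd.DataFrame({
#     "subgroup": ["0-4 yrs", "0-4 yrs", "5-12 yrs", "13-18 yrs"],
#     "label": [1, 0, 1, 0],
#     "prediction": [1, 0, 0, 0],
# })
# print(accuracy_by_subgroup(df, min_n=2))
```

Reports like this can feed the governance and rollback processes described above, with flagged subgroups triggering review rather than automatic action.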
What the experts said
"These guidelines help clinicians understand when and how to trust AI outputs, what evidence to look for, and how to integrate tools with RAPNO-aligned decision-making," said Ali Nabavizadeh, MD.
Fast-start checklist for health system leaders
- Form a cross-functional working group (neuro-oncology, radiology, data science, ethics, legal, IT).
- Audit current imaging protocols and harmonize across sites.
- Select 2-3 high-value use cases; define success metrics and a validation plan.
- Pilot in one service line; document performance and clinician feedback.
- Create a vendor and model review checklist aligned to AI-RAPNO guidance.
- Implement ongoing monitoring and a clear rollback path if performance drifts.
If you're building team capability around AI governance and model evaluation, see practical training paths by role at Complete AI Training.
Disclosure: For full disclosures of the study authors, visit thelancet.com.