SABCS25: AI in Breast Cancer Shifts From If to How – Validation, Therapy Selection, and Workflow Fit

At SABCS25, AI moved from hype to practice, with early signals of therapy response in breast cancer among the highlights. Locked assays and head-to-head validation now set the bar.

Published on: Dec 14, 2025

AI-Based Tools at SABCS25: From Prognosis to Early Treatment Response

At SABCS25, AI moved from hype to practical use. The focus was not just outcome prediction in early breast cancer; it was also about spotting early signals of response to specific therapies.

Two very different methods, classic pathology-driven features and foundation models applied to digital slides, landed at similar performance once paired with clinical data and locked into assays. That convergence is the point. It signals maturity and raises the bar for how these tools should be judged and used.

What "locked assays" mean-and why that matters

Locked assays use fixed models and thresholds, which supports reproducibility, regulatory alignment, and clinical accountability. They're different from continuously updating systems and fit better with how labs validate tests today.
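To make that concrete, here is a minimal Python sketch of what "locked" can look like in software terms: a pinned model artifact verified by checksum, a fixed version label, and a prespecified decision threshold, with any change requiring formal revalidation. The file name, hash, and threshold are hypothetical and not drawn from any SABCS25 presentation.

    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LockedAssay:
        """Immutable record of a locked AI assay: model artifact, version, threshold."""
        model_path: str            # path to the frozen model weights (hypothetical)
        expected_sha256: str       # checksum recorded at the time of validation
        version: str               # locked version label reported on every result
        decision_threshold: float  # prespecified cutoff that drives the clinical call

        def verify(self) -> None:
            """Refuse to run if the deployed artifact differs from the validated one."""
            with open(self.model_path, "rb") as f:
                actual = hashlib.sha256(f.read()).hexdigest()
            if actual != self.expected_sha256:
                raise RuntimeError(
                    f"Model artifact changed (expected {self.expected_sha256[:8]}..., "
                    f"got {actual[:8]}...); revalidation required before clinical use."
                )

    # Hypothetical usage:
    # assay = LockedAssay("model_v1.3.onnx", "ab12...", "v1.3-locked", 0.42)
    # assay.verify()  # raises if the weights on disk were silently updated

The point is not the code itself but the contract it encodes: the report always carries a fixed version, and a silent model update cannot reach patients without triggering revalidation.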

For background on software-based diagnostics and review pathways, see the FDA's overview of SaMD (Software as a Medical Device). For event context, see the official San Antonio Breast Cancer Symposium.

The shift for clinicians and tumor boards

The core question is no longer "Can AI predict?" It's "How do we validate it against existing assays, where does it genuinely guide therapy rather than just stratify risk, and how do we add it without extra friction or overtreatment?"

  • Therapy impact over risk alone: Make adoption contingent on proof that the tool changes treatment decisions and improves outcomes or reduces harm.
  • Head-to-head validation: Compare against current standard assays (e.g., genomic signatures, IHC-based measures) using the same cohorts and endpoints.
  • Actionable thresholds: Predefine which score changes your decision (escalate, de-escalate, switch therapy) and document it in pathways; see the sketch after this list.
  • Workflow fit: No extra clicks for pathologists or oncologists. Results must flow into the report, EHR, and tumor board notes.
  • Guardrails: Prevent overtreatment by pairing AI outputs with clinical criteria, second reads, and safety checks.
  • Equity and generalizability: Test across scanners, sites, and demographics. Monitor for drift and performance gaps.
  • Cost and time: Track turnaround, pathologist time, and per-case costs to justify use.
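
As a minimal sketch of how an actionable threshold plus a clinical guardrail might be encoded, assume a hypothetical locked assay score in [0, 1] and prespecified cutoffs; the cutoffs and wording below are illustrative, not taken from any presented tool.

    def tumor_board_recommendation(ai_score: float, meets_clinical_criteria: bool,
                                   low_cut: float = 0.25, high_cut: float = 0.60) -> str:
        """Map a locked assay score to a documented action, gated by clinical criteria.

        Cutoffs and labels are hypothetical; in practice they are prespecified in the
        pathway document and acted on only when clinical criteria concur.
        """
        if not meets_clinical_criteria:
            return "Defer to standard pathway; AI result reported for information only"
        if ai_score < low_cut:
            return "Consider de-escalation; confirm with second read at tumor board"
        if ai_score >= high_cut:
            return "Consider escalation or alternative regimen; confirm at tumor board"
        return "Indeterminate; follow standard assay and clinical judgment"

Keeping the clinical-criteria gate first is the guardrail: the AI output alone never changes therapy.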

Where AI adds value today

  • Early therapy response signals: Especially in neoadjuvant settings where early calls can spare patients ineffective regimens.
  • De-escalation: Identify low-benefit cases to reduce toxicity and cost, if supported by prospective evidence.
  • Trial matching: Surface patients likely to respond to targeted or novel agents, accelerating enrollment with clearer rationale.

Fast validation blueprint (pragmatic and defensible)

  • Start retrospective-prospective: Lock the model, prespecify endpoints, and test on external cohorts with blinded reads.
  • Compare to existing assays: Concordance, net reclassification improvement, decision-curve analysis, and calibration (see the sketch after this list).
  • Define clinical triggers: Explicit thresholds that change therapy plans; document exceptions.
  • Run a pilot: 50-100 consecutive cases, real workflow, real turnaround. Measure change in decisions and downstream effects.
  • Governance: Multidisciplinary review (pathology, oncology, stats, IT). Set policies for versioning, QA, and audit logs.
  • Patient communication: Clear language on what the test does and does not mean; incorporate into consent where applicable.
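
For the head-to-head comparison step, decision-curve analysis reduces to a simple net-benefit calculation at each clinically plausible threshold probability. A minimal sketch, assuming you already have observed outcomes and predicted probabilities from the external cohort (variable names are illustrative):

    import numpy as np

    def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, threshold: float) -> float:
        """Decision-curve net benefit at one threshold probability.

        net benefit = TP/N - FP/N * (pt / (1 - pt)), where pt is the probability
        at which a clinician would be willing to act on the prediction.
        """
        n = len(y_true)
        treat = y_prob >= threshold
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        return tp / n - fp / n * (threshold / (1 - threshold))

    # Compare the AI assay and the standard assay on the same cohort and endpoints.
    # y, p_ai, p_std are hypothetical arrays of 0/1 outcomes and predicted probabilities.
    # for pt in np.arange(0.05, 0.50, 0.05):
    #     print(pt, net_benefit(y, p_ai, pt), net_benefit(y, p_std, pt))

Plotting net benefit for the AI assay, the standard assay, treat-all, and treat-none across thresholds makes the comparison interpretable at the decision points tumor boards actually use.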

Data and workflow must-haves

  • Digital slide quality: Consistent scanning, color normalization, and QC to avoid spurious signals (a normalization sketch follows this list).
  • Structured clinical data: Stage, grade, receptor status, and prior therapies, clean and mapped to standards.
  • EHR integration: Single report with assay result, threshold, confidence, and recommended next steps.
  • Oversight: Periodic re-review of discordant cases; cross-site comparisons to catch drift.
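
One widely used way to reduce scanner-to-scanner color variation is Reinhard-style normalization: matching each slide tile's per-channel mean and standard deviation in LAB space to a reference tile. A minimal sketch, assuming scikit-image is available; this is an illustration of the general technique, not the preprocessing any specific SABCS25 tool used.

    import numpy as np
    from skimage import color

    def reinhard_normalize(image_rgb: np.ndarray, reference_rgb: np.ndarray) -> np.ndarray:
        """Match a tile's LAB mean/std to a reference tile (Reinhard color transfer).

        Both inputs are float RGB arrays in [0, 1]; returns a normalized RGB array.
        """
        src = color.rgb2lab(image_rgb)
        ref = color.rgb2lab(reference_rgb)
        for c in range(3):  # L, a, b channels
            s_mean, s_std = src[..., c].mean(), src[..., c].std()
            r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
            src[..., c] = (src[..., c] - s_mean) / (s_std + 1e-8) * r_std + r_mean
        return np.clip(color.lab2rgb(src), 0.0, 1.0)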

For researchers and method teams

  • Prospective utility: Go beyond AUC; show that decisions and outcomes improve, or that toxicity decreases.
  • Transparent feature reporting: For pathology-driven features, specify pre-analytic variables and QC. For foundation models, detail slide preprocessing and tiling.
  • Site diversity: Multi-institution datasets with varied scanners and populations to stress-test generalizability.
  • Locked vs. updateable: If updates are planned, define change control and revalidation criteria upfront.

The takeaway: AI is ready to be judged by clinical utility, not promise. If it clarifies therapy selection and fits the workflow without adding friction, it deserves a place on the report. If not, it belongs in further study, not routine care.

If your team is building AI fluency for clinical and research use, you may find these curated learning paths helpful: AI courses by job role.

