
REVIEW article

Front. Oncol.

Sec. Gastrointestinal Cancers: Hepato Pancreatic Biliary Cancers

This article is part of the Research Topic: The Evolving Landscape of Gastrointestinal Oncology: Integrating Artificial Intelligence, Molecular Profiling, and Clinical Translation

Explainable Artificial Intelligence (XAI) in Pancreatic Cancer Prediction: From Transparency to Clinical Decision-Making

Provisionally accepted
  • 1 King Abdullah International Medical Research Center (KAIMRC), Riyadh, Saudi Arabia
  • 2 King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
  • 3 Ministry of National Guard Health Affairs, Riyadh, Saudi Arabia

The final, formatted version of the article will be published soon.

Background/Objectives: Pancreatic cancer (PC) remains among the most lethal malignancies worldwide, with a persistently low 5-year survival rate despite advances in systemic therapies and surgical innovation. Machine learning (ML) has emerged as a transformative tool for early detection, prognostic modelling, and treatment planning in PC, yet widespread clinical use is constrained by the "black box" nature of many models. Explainable artificial intelligence (XAI) offers a pathway to reconcile model accuracy with clinical trust, enabling transparent, reproducible, and clinically meaningful predictions.

Methods: We reviewed the literature from 2020–2025, focusing on ML-based studies in PC that incorporated or discussed XAI techniques. Methods were grouped by model architecture, data modality, and interpretability framework. We synthesized findings to evaluate the technical underpinnings, interpretability outcomes, and clinical relevance of XAI applications.

Results: Across 21 studies of ML in PC, only three explicitly integrated XAI, primarily using SHAP and SurvSHAP. These methods helped identify key biomarkers, comorbidities, and survival predictors while enhancing clinician trust. XAI approaches were categorized by stage (ante-hoc vs. post-hoc), compatibility (model-agnostic vs. model-specific), and scope (local vs. global explanations). Barriers to adoption included methodological instability, limited external validation, weak workflow integration, and a lack of standardized evaluation.

Conclusions: XAI has the potential to serve as a cornerstone for advancing transparent, trustworthy ML in PC prediction. By clarifying model reasoning, XAI enhances clinical interpretability and regulatory readiness. This review provides a technical and clinical synthesis of current XAI practices, positioning explainability as essential for translating ML innovations into actionable oncology tools.
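To make the SHAP workflow referenced in the Results concrete, the minimal sketch below shows a post-hoc explanation pipeline illustrating both scopes from the taxonomy above: local (per-patient) attributions and a global (cohort-level) importance ranking derived from them. The dataset, feature names (e.g., CA19_9), and model choice are hypothetical placeholders for illustration only, not drawn from any study in this review.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical cohort: 500 patients, four illustrative predictors.
feature_names = ["CA19_9", "bilirubin", "age", "diabetes"]
X = rng.normal(size=(500, 4))
# Synthetic outcome driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Post-hoc explanation. TreeExplainer is model-SPECIFIC (exact for tree
# ensembles); shap.KernelExplainer is the model-AGNOSTIC alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # local scope: one row per patient

# Local explanation for a single patient (signed feature contributions).
print(dict(zip(feature_names, np.round(shap_values[0], 3))))

# Global scope: mean |SHAP| per feature across the cohort.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

SurvSHAP(t), also referenced in the Results, extends the same attribution idea from a single predicted risk to time-dependent survival functions, which is why it appears in the prognostic studies discussed here.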

Keywords: precision oncology, pancreatic cancer, explainable artificial intelligence, machine learning, model interpretability, clinical decision support

Received: 07 Oct 2025; Accepted: 28 Nov 2025.

Copyright: © 2025 Alharbi and Alfayez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Asma Alfayez

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.