Innovations in Cancer Imaging and Radiomics through Explainable Artificial Intelligence


About this Research Topic

Submission deadlines

Manuscript Submission Deadline: 30 May 2026

This Research Topic is currently accepting articles.

Background

Explainable Artificial Intelligence (XAI) is pivotal in cancer imaging and radiomics, bringing transparency and interpretability to AI models often seen as "black boxes." In cancer imaging, where accurate and reliable diagnostics are essential, XAI enables clinicians to understand and trust AI-driven insights, which is crucial for safely integrating AI into clinical workflows. By elucidating how AI models analyze complex imaging data, XAI allows healthcare professionals to validate AI conclusions against their clinical expertise, enhancing decision-making.

XAI also facilitates the identification and validation of imaging biomarkers by revealing the specific features that contribute to AI predictions, accelerating the discovery of critical diagnostic and prognostic indicators. It further helps identify and mitigate biases in AI models, ensuring that results are robust and applicable across diverse patient populations. By clarifying complex patterns in imaging data, XAI supports cancer treatment strategies that are more accurate and tailored to individual patients, and it fosters collaboration between AI systems and clinicians, improving their joint decision-making and ultimately enhancing patient outcomes. Integrating XAI into cancer imaging and radiomics marks a significant step towards precision medicine, enabling cancer to be diagnosed and treated with greater confidence and accountability while ensuring the ethical and effective application of AI in healthcare.
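The feature-attribution idea described above, revealing which input features drive a model's predictions, can be illustrated with a minimal permutation-importance sketch. This is a hedged, self-contained example on synthetic data with a stand-in classifier (`model_predict` and the feature matrix are invented for illustration; no real radiomics pipeline or library is implied):

```python
# Permutation importance: score each feature by the drop in model
# accuracy when that feature's column is randomly shuffled.
# All data below are synthetic; the "classifier" is a toy threshold rule.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "radiomic" feature matrix: 200 cases x 3 features.
# Only feature 0 actually determines the binary label.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    """Stand-in classifier: thresholds feature 0 only."""
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def permutation_importance(X, y, n_repeats=10):
    """Mean accuracy drop per feature when its column is shuffled."""
    base = accuracy(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - accuracy(y, model_predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(X, y)
print(imp)  # feature 0 dominates; features 1 and 2 contribute nothing
```

In a real radiomics setting the same logic would rank texture, shape, and intensity features by their contribution to a trained model, which is one route to the biomarker discovery and bias auditing discussed above.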

Despite its potential, implementing XAI in cancer imaging and radiomics faces several challenges. One major issue is the complexity of medical imaging data, which necessitates sophisticated models that are difficult to interpret; simplifying such models for the sake of explainability can degrade performance. Developing standardized frameworks for explainability is also challenging, since different stakeholders, including clinicians, researchers, and patients, have different requirements for understanding AI decisions. Integrating XAI into existing clinical workflows poses a further challenge, requiring extensive validation and training to ensure seamless incorporation into routine tasks; XAI tools must be user-friendly and clinically relevant. Maintaining data privacy and security while providing interpretable insights is another significant concern, especially with sensitive medical information. Finally, continuous research and development are needed to keep XAI tools current and adaptable to new advances, which requires collaboration among technologists, clinicians, and policymakers.

This Research Topic aims to foster interdisciplinary research and collaboration between the fields of cancer imaging/radiomics and XAI, advancing our understanding of how XAI techniques can enhance the interpretability, transparency, and trustworthiness of AI-driven cancer imaging and radiomics. We welcome contributions from researchers, practitioners, and policymakers interested in the intersection of cancer imaging, radiomics, and AI. Potential topics of interest for this Research Topic include, but are not limited to:

  • Developing Explainable AI Models for Early Cancer Detection in Medical Imaging
  • Assessing the Trade-off Between Model Accuracy and Interpretability in Cancer Radiomics
  • Standardizing Explainability Metrics for AI Systems in Cancer Imaging
  • Explainable AI for Clinical Decision-Making in Personalized Cancer Treatment
  • XAI for Patient Trust and Acceptance in Cancer Care
  • Novel Techniques for Visualizing AI Decision-Making in Radiomics
  • Data Privacy and Security in Explainable AI Models for Cancer Imaging
  • XAI to Discover New Imaging Biomarkers for Cancer Prognosis
  • Comparing Explainability Methods Across Different AI Architectures in Oncology
  • XAI in Reducing Diagnostic Errors in Cancer Imaging
  • Explainable Deep Learning for Multi-modal Cancer Data Analysis
  • Integrating XAI into Existing Radiology Workflows

Please note that manuscripts consisting solely of bioinformatics or computational analysis of public genomic or transcriptomic databases which are not accompanied by robust and relevant validation (clinical cohort or biological validation in vitro or in vivo) are out of scope for this Research Topic.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Case Report
  • Classification
  • Clinical Trial
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary
  • Hypothesis and Theory

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.

Keywords: Cancer Imaging, AI Models for Early Cancer, XAI, Radiomics, Explainable Artificial Intelligence

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 22k Topic views
  • 15k Article views
  • 3,990 Article downloads