Explainable AI (XAI) in Biomedical Diagnostics

  • 1,452 total views and downloads

About this Research Topic

Submission deadlines

  • Manuscript summary submission deadline: 20 March 2026
  • Manuscript submission deadline: 24 July 2026
  • This Research Topic is currently accepting articles.

Background

Explainable Artificial Intelligence (XAI) is becoming vital to biomedical diagnostics, particularly in applications critical to public health such as cancer risk assessment. AI technologies bring powerful tools capable of analyzing complex datasets and extracting meaningful predictions, yet many AI-driven solutions operate as "black boxes," obscuring the rationale behind their decisions. This opacity poses a significant challenge in clinical settings, where trust, accountability, and interpretability are crucial for clinician and patient acceptance. Recent studies have made strides in demystifying these AI processes, focusing on models that balance accuracy with interpretability, but further work is needed to integrate such models into daily clinical practice.

The principal aim of this Research Topic is to push the boundaries of XAI development and application within biomedical diagnostics, focusing on the application of Systems Biology to cancer detection and prediction tools. The goal is to devise or improve machine learning models that are both transparent and precise. This includes adopting existing interpretable models, such as decision trees and rule-based systems, and developing novel methodologies that promote understanding while maintaining predictive quality. The scope also extends to applying XAI techniques such as SHAP and LIME across comprehensive and varied datasets, to solidify the role of interpretability in enhancing clinical trust and improving diagnostic outcomes.
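To make the SHAP idea mentioned above concrete: SHAP attributes a model's prediction for one patient to individual input features using Shapley values from cooperative game theory. The sketch below (purely illustrative, not part of this Research Topic; the model, features, and baseline are invented for the example) computes exact Shapley values for a toy linear "risk score" by enumerating feature coalitions. The `shap` library approximates this computation efficiently for real models, where exhaustive enumeration is infeasible.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley attribution: phi_i = weighted average, over all
    coalitions S of the other features, of the marginal effect of
    revealing feature i (others in S take the patient's value x,
    features outside S stay at the reference baseline)."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                masked = list(baseline)
                for j in S:
                    masked[j] = x[j]
                without_i = model(masked)
                masked[i] = x[i]
                with_i = model(masked)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (with_i - without_i)
        phis.append(phi)
    return phis

# Toy "cancer risk" model: a linear score over three hypothetical biomarkers.
weights = [0.5, -0.2, 0.8]
model = lambda x: sum(w * v for w, v in zip(weights, x))

x = [2.0, 1.0, 3.0]         # the patient being explained
baseline = [0.0, 0.0, 0.0]  # reference input (e.g. cohort mean)

phis = shapley_values(model, x, baseline)
# For a linear model, phi_i = w_i * (x_i - baseline_i), and the
# attributions sum exactly to model(x) - model(baseline).
```

The "efficiency" property demonstrated here, that attributions sum exactly to the prediction's deviation from the baseline, is a key reason SHAP-style explanations are attractive in clinical settings: every point of risk score is accounted for by a named feature.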

To gather further insights into the application of XAI, we welcome articles addressing, but not limited to, the following themes:

  • Interpretable deep learning for histopathology and radiology.
  • Case studies applying XAI to enhance cancer screening workflows.
  • Human-centric evaluations of AI explanation methods in healthcare settings.
  • Impacts of human-AI collaboration, focusing on clinician engagement with XAI models.
  • Regulatory implications and standards for adopting explainability in clinical AI deployments.

Potential contributions can include original research papers, reviews, and meta-analyses that advance the understanding and integration of transparent AI solutions in healthcare. These insights will collectively shape the future development of diagnostic tools that are not only effective but also align with the ethical and operational standards of the medical field.


Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Case Report
  • Clinical Trial
  • Community Case Study
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Explainable Artificial Intelligence (XAI), Biomedical diagnostics, Public health, Cancer risk assessment, AI technologies, Complex datasets, Meaningful predictions, Black box, Transparency

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
