Artificial intelligence (AI) systems are now used to screen images, predict disease risk, and plan care pathways. Yet most clinical models are black boxes: they output a number or a label without revealing why. This opacity limits trust, hampers regulatory approval, and makes it hard for clinicians to correct mistakes or spot bias. Explainable AI (XAI) offers tools such as feature attribution, counterfactual examples, and rule‑based surrogates that open the black box and reveal the evidence behind a prediction. Healthcare, with its high stakes and ethical duties, urgently needs rigorous research on how to design, test, and integrate such explanations. This Research Topic provides a venue to consolidate multidisciplinary advances at the intersection of AI, medicine, ethics, and human‑computer interaction.
Despite the rapid uptake of AI in radiology, pathology, genomics, and hospital logistics, few models are deployed routinely at the bedside because clinicians and patients cannot inspect or question their reasoning. This Research Topic aims to bridge that gap by collecting state‑of‑the‑art contributions that make medical AI transparent, trustworthy, and actionable. We seek studies that (1) create novel explanation algorithms tailored to structured, image, signal, or language data; (2) rigorously evaluate explanations with human experts, uncertainty metrics, or downstream clinical tasks; and (3) demonstrate how explanations can mitigate bias, support shared decision‑making, or satisfy emerging regulatory frameworks such as the EU AI Act and FDA guidance. By combining methodological innovation with real‑world evidence, the collection will chart best practices and open challenges for deploying XAI safely in diverse healthcare settings.
Submissions may cover algorithmic advances, empirical evaluations, human‑subject studies, or visionary perspectives related to XAI in healthcare. Themes of interest include model‑agnostic and model‑specific explanation methods; fairness, bias auditing, and accountability; interactive visual analytics; counterfactual and example‑based reasoning; natural‑language generation of clinical rationales; and user‑centered evaluation frameworks. We also welcome reports on interpretable modeling approaches, domain‑specific case studies (e.g., infection control, chronic disease management, aging care), and integration of explanations into clinical workflows or medical devices.
Suitable article types include Original Research, Methods, Technology and Code, and Clinical Trial reports, as well as Systematic Reviews, Brief Research Reports, and Perspective or Opinion pieces highlighting open challenges and future directions.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Clinical Trial
Community Case Study
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Review
Study Protocol
Systematic Review
Technology and Code
Keywords: Explainable Artificial Intelligence, Healthcare Informatics, Model Transparency, Counterfactual Explanations, Clinical Decision Support, Fairness and Bias, Human‑AI Interaction
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.