Explainable Artificial Intelligence in Toxicology: Building Trust in Predictive Models

About this Research Topic

Submission deadlines

  • Manuscript Summary Submission Deadline: 29 April 2026
  • Manuscript Submission Deadline: 17 August 2026

This Research Topic is currently accepting articles.

Background

Artificial intelligence (AI) and machine learning (ML) hold significant promise in advancing the field of toxicology by facilitating the analysis of large datasets, uncovering complex relationships, and accurately predicting toxicological outcomes. Such advancements aim to decrease reliance on traditional animal testing, expedite the evaluation of chemical safety, and offer vital insights into the mechanisms underpinning toxicity. However, a notable challenge in implementing AI within toxicology is the inherent “black-box” nature of many sophisticated algorithms. This lack of transparency has fueled a demand for models that not only provide accurate predictions but also articulate the foundations of their results in a manner that is understandable to regulatory bodies, clinicians, and toxicologists.


The emergent field of Explainable Artificial Intelligence (XAI) seeks to address this issue by enhancing algorithmic transparency and interpretability. XAI technologies aim to illuminate the key features driving AI predictions, align outputs with known biological pathways, and ensure results are congruent with established toxicological knowledge. By providing interpretable frameworks, XAI has the potential to bridge the gap between computational predictions and mechanistic insights, promoting regulatory trust, stakeholder acceptance, and a more seamless integration of varied toxicological data from in vitro, in vivo, and in silico studies. Further, XAI offers unique opportunities to identify novel toxicity biomarkers, explore mode-of-action pathways, and support hypothesis-driven toxicological research.
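
To make this concrete, the sketch below illustrates one widely used, model-agnostic XAI technique, permutation feature importance, applied to a toy toxicity classifier. It is purely illustrative and not a prescribed method for submissions: the synthetic dataset, descriptor names, and random-forest model are hypothetical placeholders standing in for real chemical descriptors and toxicity labels.

```python
# Illustrative sketch only: permutation feature importance, one common
# model-agnostic interpretability technique. The data, descriptor names,
# and model are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a chemical-descriptor matrix (e.g., logP,
# molecular weight, ...) with a binary toxic / non-toxic label.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=4, random_state=0)
feature_names = [f"descriptor_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and
# measure the drop in model score. Features whose shuffling hurts the
# score most are the ones driving the prediction.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Attribution methods such as SHAP or LIME serve the same purpose: ranking which inputs drive a prediction so that the ranking can be checked against known toxicological mechanisms.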


This Research Topic aims to highlight interdisciplinary contributions that explore the integration of explainable AI in toxicology, emphasizing methodological advances, case studies, and insights into enhancing predictive accuracy through explainability. Articles will spotlight strategies for constructing AI models that are not only robust and reliable but also transparent and actionable. The initiative endeavors to cultivate a community around XAI within the toxicology sector, fostering its adoption in both research and applied contexts.


We welcome contributions across diverse areas of toxicology and computational sciences, including but not limited to:

· Development and application of explainable AI models in predictive toxicology.

· Case studies applying XAI to chemical safety evaluation, drug toxicity prediction, and environmental risk assessment.

· Integration of omics, imaging, or high-throughput screening data with interpretable machine learning approaches.

· XAI for mechanistic toxicology: linking predictions to pathways, biomarkers, and adverse outcome pathways (AOPs).

· Frameworks for regulatory acceptance of AI-driven toxicological models.

· Ethical, societal, and practical challenges of adopting XAI in toxicology.

· Comparative evaluations of black-box vs. interpretable models in toxicological contexts.


By focusing on transparency, accountability, and scientific rigor, this Research Topic seeks to position explainable AI as a cornerstone of next-generation toxicology.


We welcome original research, reviews, methods, mini-reviews, and perspectives that address explainable AI in toxicology. Submissions may focus on methodological innovations, applications, or regulatory considerations.


Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary
  • Hypothesis and Theory
  • Methods
  • Mini Review

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.

Keywords: computational toxicology, artificial intelligence, machine learning

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
